good morning
good morning
[Applause]
welcome to google i o it’s a beautiful
day i think warmer than last year hope
you’re all enjoying it thank you for
joining us
i think we have over 7,000 people here
today
as well as
many many people we are live streaming
this to many locations around the world
so thank you all for joining us today we
have a lot to cover
but before we get started you know i
had one important piece of business which i
wanted to get over with
towards the end of last year it came to
my attention that we had a major bug in
one of our core products
it turns out
we got the cheese wrong in our burger
emoji
anyway we went hard to work i never knew
so many people cared about where the cheese is
we fixed it you know the irony of the
whole thing is i’m a vegetarian in the
first place
[Applause]
so we fixed it
but hopefully we got the cheese right
but as we were working on this this came
to my attention
i don’t even want to tell you
the explanation the team gave me as to
why the foam is floating above the beer
but we restored the natural laws of
physics
so all is well we can get back to
business
we can talk about all the progress since
last year’s io
i’m sure all of you would agree it’s
been an extraordinary year on many
fronts i’m sure you’ve all felt it
we’re at an important inflection point
in computing
and it’s exciting to be driving
technology forward
and it’s made us even more reflective
about our responsibilities
expectations for technology vary greatly
depending on where you are in the world
or what opportunities are available to
you
for someone like me who grew up without
a phone
i can distinctly remember
how gaining access to technology can
make a difference in your lives
and we see this in the work we do
around the world
you see it
when someone gets access to a smartphone
for the first time
and you can feel it in the huge demand
for digital skills we see
that’s why we’ve been so focused on
bringing digital skills to communities
around the world
so far we have trained over 25 million
people
and we expect that number to rise to over
60 million in the next five years
it’s clear technology can be a positive
force
but it’s equally clear that we can’t
just be wide-eyed about the innovations
technology creates
there are very real and important
questions
being raised about the impact of these
advances and the role they’ll play in
our lives
so we know the path ahead needs to be
navigated carefully
and deliberately
and we feel a deep sense of
responsibility to get this right
that’s the spirit with which we are
approaching our core mission
to make information more useful
accessible and beneficial to society
i’ve always felt that we were fortunate
as a company to have a timeless mission
that feels as relevant today as when we
started
and we’re excited about how we can
approach our mission with renewed vigor
thanks to the progress we see in ai
ai is enabling us to do this
in new ways
solving problems for our users around
the world
last year at google i o we announced
google ai
it’s a collection of our teams and
efforts to bring the benefits of ai to
everyone
and we want this to work globally so we
are opening ai centers around the world
ai is going to impact many many fields
and i want to give you a couple of
examples today
healthcare is one of the most important
fields ai is going to transform
last year we announced our work on
diabetic retinopathy which is a leading
cause of blindness and we use deep
learning to help doctors diagnose it
earlier
and we’ve been running field trials
since then at aravind and sankara
hospitals in india and the field trials
are going really well
we are bringing expert diagnosis to
places where trained doctors are scarce
it turned out using the same retinal
scans
there were things which humans didn’t
quite know to look for
but our ai systems offered more insights
your same eye scan turns out holds
information
with which
we can predict
the five-year risk of you having an
adverse cardiovascular event
heart attacks or strokes
so to me the interesting thing is that
beyond what doctors could
find in these eye scans the machine
learning systems offer newer insights
this could be the basis for a new
non-invasive way to detect
cardiovascular risk we just published
the research and we are going to be
working to bring this to
field trials with our partners
another area where ai can help is in
actually helping doctors predict medical
events turns out doctors have a lot of
difficult decisions to make
and for them getting advance notice
say 24 to 48 hours before a patient is
likely to get very sick makes a tremendous
difference in the outcome
and so we put our machine learning
systems to work we’ve been working with
our partners using de-identified
medical records
and it turns out if you go and analyze
over 100,000 data points per patient
more than any single doctor could
analyze
we can actually quantitatively predict
the chance of readmission
24 to 48 hours earlier than
traditional methods giving doctors
more time to act
we are publishing our paper on this
later today and we are looking forward
to partnering with hospitals and medical
institutions
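the readmission idea above can be sketched in code. the published system uses deep learning over de-identified electronic health records; the toy below swaps in a plain logistic-regression risk model trained on synthetic patient features, so every variable and number here is illustrative, not the real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical features aggregated from a patient's record
# (the system in the talk analyzes ~100,000 data points per patient)
n_patients, n_features = 1000, 20
X = rng.normal(size=(n_patients, n_features))
true_w = rng.normal(size=n_features)
y = (X @ true_w > 0).astype(float)  # synthetic readmission labels

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# train logistic regression by gradient descent on the log loss
w = np.zeros(n_features)
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w)
    w -= lr * X.T @ (p - y) / n_patients

risk = sigmoid(X @ w)  # predicted readmission probability per patient
accuracy = np.mean((risk > 0.5) == y)
```

a real pipeline would first aggregate the time-stamped record into features, or feed the sequence to a recurrent model, before any such risk score is computed.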
another area where ai can help is
accessibility
you know we can make day-to-day use
cases much easier for people
let’s take a common use case you know
you come back home at night and
you turn your tv on it’s not that
uncommon to see
two or more people passionately talking
over each other
imagine if you are hearing impaired and
you’re relying on closed captioning to
understand what’s going on this is how
it looks to you
on a danny ainge level but he’s above an
angelo level in other words
enough
as you can see it’s gibberish you can’t
make sense of what’s going on
so we have machine learning technology
called looking to listen it not only
looks for audio cues
but combines them with visual cues to
clearly disambiguate the two voices
let’s see how that can work maybe in
youtube
it’s not on a danny ainge level but he’s
above a calendula level in other words
he understands enough just
you said you said it was all right to
lose all the purpose you said it’s all
right to lose on purpose and advertised
that to the fans it’s perfectly okay
you said it’s okay we have nothing else
to talk about
we have a lot to talk about
but you can see how we can put
technology to work to make an important
day-to-day use case profoundly better
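to make the idea concrete, here is a minimal sketch of mask-based source separation, the general family looking to listen belongs to. in the real system a neural network predicts a mask per speaker from the audio plus that speaker’s face track; here we cheat and compute ideal ratio masks directly from known sources, so this only shows why good masks suffice, not how to predict them:

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_bins = 50, 64

# magnitude spectrograms of two speakers (synthetic stand-ins)
s1 = rng.random((n_frames, n_bins))
s2 = rng.random((n_frames, n_bins))
mix = s1 + s2  # simplification: magnitudes assumed to add

# ideal ratio masks: in looking to listen, a network predicts masks
# like these per speaker, conditioned on the visual cues
mask1 = s1 / (s1 + s2 + 1e-8)
mask2 = s2 / (s1 + s2 + 1e-8)

est1 = mask1 * mix  # separated estimate of speaker 1
est2 = mask2 * mix  # separated estimate of speaker 2

err1 = np.abs(est1 - s1).mean()
```

with ideal masks the sources are recovered almost exactly; the hard part, which the neural network handles, is predicting the masks from the mixture.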
you know the great thing about
technology is it’s constantly evolving
in fact we can even apply machine
learning to a 200 year old technology
morse code and make an impact in
someone’s quality of life
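the gboard morse work pairs a morse input surface with machine-learned predictions; the non-ml part, turning dots and dashes into text, is simple enough to sketch directly (letters separated by spaces, words by " / ", a common plain-text convention assumed here):

```python
# minimal morse-to-text decoder over the letters a-z
MORSE = {
    ".-": "a", "-...": "b", "-.-.": "c", "-..": "d", ".": "e",
    "..-.": "f", "--.": "g", "....": "h", "..": "i", ".---": "j",
    "-.-": "k", ".-..": "l", "--": "m", "-.": "n", "---": "o",
    ".--.": "p", "--.-": "q", ".-.": "r", "...": "s", "-": "t",
    "..-": "u", "...-": "v", ".--": "w", "-..-": "x", "-.--": "y",
    "--..": "z",
}

def decode(signal: str) -> str:
    """Decode a morse string; unknown symbols become '?'."""
    words = signal.split(" / ")
    return " ".join(
        "".join(MORSE.get(sym, "?") for sym in word.split())
        for word in words
    )

message = decode(".... . .-.. .-.. --- / .-- --- .-. .-.. -..")
```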
it’s great to reinvent products with ai
gboard is actually a great example of it
users choose over 8 billion auto
corrections each and every day
another example of one of our core
products
which we are redesigning with ai is
gmail we just gave gmail a fresh new
look with a recent redesign hope you’re all
enjoying using it we are bringing
another feature to gmail
we call it smart compose
so as the name suggests
we use machine learning
to start suggesting phrases for you as
you type
all you need to do is to hit tab
and keep auto-completing
in this case it understands the subject
is taco tuesday it suggests
chips salsa and guacamole
it takes care of mundane things like
addresses
so that you don’t need to worry about it
you can actually focus on what you want
to type
i’ve been loving using it i’ve been
sending a lot more emails to the company
not sure what the company thinks of it
but it’s been great we are rolling out
smart compose to all our users this
month and hope you enjoy using it as
well
another product which we built from the
ground up using ai
is google photos
works amazingly well and at scale
you know if you click on one of these
photos you get what we call the photo viewer
experience where you’re looking at one
photo at a time
so that you understand the scale
over 5 billion photos are viewed by our
users every single day
so we want to use ai to help in those
moments
so we are bringing a new feature called
suggested actions
essentially suggesting smart actions
right in context for you to act on
say for example
you went to a wedding and you’re looking
through those pictures we understand
your friend lisa is in the picture and
we offer to share the three photos with
lisa
and with one click those photos can be
sent to her
so that anxiety where everyone is trying
to get the pictures onto their phone i
think we can make that better
say for example in the same
wedding if the photos are underexposed
our ai systems offer a suggestion
to fix the brightness right there
one tap and we can fix the brightness
for you
or if you took a picture of a document
which you want to save for later we can
recognize it
convert the document to pdf
and make it much easier for you to use
later
you know we want to make all these
simple cases delightful
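the brightness suggestion can be sketched with simple gamma correction, one plausible building block for such a fix (the actual google photos pipeline is not public, so treat this as an assumption):

```python
import numpy as np

def fix_brightness(img: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Brighten an underexposed image with gamma correction.

    img: float array with values in [0, 1]; gamma < 1 brightens,
    gamma > 1 darkens.
    """
    return np.clip(img, 0.0, 1.0) ** gamma

# a synthetic underexposed image: every pixel value is very low
dark = np.full((4, 4, 3), 0.04)
bright = fix_brightness(dark)
```

one tap in the ui would simply apply a correction like this with an automatically chosen gamma.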
by the way ai can also deliver
unexpected moments
so for example if you have this cute
picture of your kid
we can make it better we can drop the
background color pop the
color and make the kid even cuter
or if you happen to have a very special
memory something in black and white
maybe of your mother and grandmother
we can recreate that moment in color
and make that moment even more real
and special
[Applause]
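the background-drop effect above is often called color pop; a minimal version keeps the subject in color and desaturates everything else, given a subject mask. in the real feature the mask would come from a segmentation model; here the caller supplies it:

```python
import numpy as np

def color_pop(img: np.ndarray, subject_mask: np.ndarray) -> np.ndarray:
    """Keep the subject in color, turn the background grayscale.

    img: H x W x 3 floats in [0, 1]; subject_mask: H x W booleans.
    """
    # luminance-free grayscale via the channel mean (a simplification)
    gray = img.mean(axis=2, keepdims=True).repeat(3, axis=2)
    mask = subject_mask[..., None]  # broadcast over channels
    return np.where(mask, img, gray)

img = np.zeros((2, 2, 3))
img[..., 0] = 1.0  # a fully red image
mask = np.array([[True, False], [False, False]])  # subject is one pixel
popped = color_pop(img, mask)
```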
all these features are going to be
rolling out to google photos users in
the next couple of months
the reason we are able to do this
is because for a while we’ve been
investing in the scale of our
computational architecture
this is why last year we talked about
our tensor processing units
these are special purpose machine
learning chips
these are driving all the product
improvements you’re seeing today and
we’ve made it available to our cloud
customers
since last year we’ve been hard at
work and today i’m excited to announce
our next generation tpu 3.0
these chips are so powerful that for the
first time
we’ve had to introduce liquid cooling in
our data centers
and we put these chips in the form of
giant pods
each of these pods is now 8x more
powerful than last year’s well over 100
petaflops and this is what allows us to
develop better models
larger models more accurate models
and helps us tackle
even bigger problems
and one of the biggest problems we are
tackling with ai is the google assistant
our vision for the perfect assistant is
that it’s naturally conversational it’s
there when you need it so that you can
get things done in the real world and we
are working to make it even better
we want the assistant to be something
that’s natural and comfortable to talk
to
and to do that we need to start with the
foundation of the google assistant the
voice
today that’s how most users interact
with the assistant
our current voice is codenamed holly she
was a real person
she spent months in our studio
and then we stitched those recordings
together to create the voice
but 18 months ago we announced a
breakthrough from our deepmind team
called wavenet
unlike the current systems
wavenet actually models the underlying
raw audio to create a more natural voice
it’s closer to how humans speak
the pitch the pace
even all the pauses that convey meaning
we want to get all of that right
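wavenet generates raw audio autoregressively, one sample at a time, using stacks of dilated causal convolutions over a long receptive field and a categorical output over quantized amplitudes. the toy below keeps only the autoregressive loop, predicting each sample from the previous k with an untrained linear model, so it produces noise rather than speech but shows the sampling structure:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy autoregressive generator: each new sample depends on the
# previous k samples; real wavenet uses dilated causal convolutions
# over thousands of past samples, with learned parameters
k = 8
weights = rng.normal(scale=0.3, size=k)  # untrained stand-in parameters

samples = list(rng.normal(scale=0.1, size=k))  # seed context
for _ in range(1000):
    context = np.array(samples[-k:])
    nxt = np.tanh(context @ weights) + rng.normal(scale=0.01)
    samples.append(nxt)  # feed the new sample back in

audio = np.array(samples)
```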
so we’ve worked hard with wavenet
and we are adding as of today
six new voices to the google assistant
let’s have them say hello
good morning everyone
i’m your google assistant
welcome to shoreline
amphitheater we hope you’ll enjoy google
i o
back to you sundar
you know our goal is one day to get the
right accents languages and dialects
right globally
you know wavenet can make this much
easier
with this technology we started
wondering
who we could get into the studio
with an amazing voice
take a look
couscous a type of north african
semolina in granules made from crushed
durum wheat
[Music]
i want a puppy with sweet eyes and a
fluffy tail who likes my haikus don’t we
all
happy birthday to the person whose
birthday it is
happy birthday
to you
john legend
he would probably tell you he don’t want
to brag
but he’ll be the best assistant you ever
had can you tell me where you live
you can find me on all kinds of devices
phones google homes and if i’m lucky
in your heart
that’s right john legend’s voice is
coming to the assistant
clearly he didn’t spend all the time in
the studio answering every possible
question that you could ask
but wavenet allowed us to shorten the
studio time and the model can actually
capture the richness of his voice
his voice will be coming later this year
in certain contexts so that you can get
responses like this
good morning sundar right now in
mountain view it’s 65 with clear skies
today it’s predicted to be 75 degrees
and sunny at 10 am you have an event
called google i o keynote then at 1 pm
you have margaritas have a wonderful day
i’m looking forward to 1 pm
[Music]
so john’s voice is coming later this
year i’m really excited we can drive
advances like this with ai we are doing
a lot more with the google assistant and
to talk to you a little bit more about
it let me invite scott onto the stage