Artificial intelligence in healthcare: opportunities and challenges | Navid Toosi Saidy | TEDxQUT

TEDx Talks

2 years ago

136,497 views


Comments:

Krida Tri Wahyuli - 01.10.2023 10:31

Actually, I want to join this project, especially to improve outcomes for cancer patients by minimizing the duration of the diagnosis process.

Roboreception - AI Call Answering Dental Offices - 24.08.2023 21:01

The depiction of AI in popular culture has often been one of dystopian futures, where machines rise against humanity. However, as the speaker rightly points out, the reality is far from this portrayal. AI has the potential to revolutionize healthcare, offering personalized care, streamlining hospital operations, and providing accurate decision-making tools. The example of AI's role in cancer diagnosis and treatment is particularly poignant. By consolidating data from various sources, AI can provide accurate predictions about a patient's diagnosis, treatment options, and prognosis. This is a game-changer, especially for patients like Peter, who, without AI's intervention, might have faced a grim prognosis.

However, the journey of integrating AI into healthcare is not without its challenges. One of the most significant hurdles is the existing regulatory framework, which is not designed to accommodate the dynamic nature of AI. Traditional software is static, producing the same output for the same data. In contrast, AI has the intrinsic ability to learn and evolve, making it more adaptable and, ideally, more intelligent over time. Locking the learning potential of AI models, as the current regulatory approach suggests, limits their potential and can even be detrimental to patient care.

Furthermore, the issue of data bias is critical. If AI models are trained predominantly on data from one demographic, their accuracy and reliability can diminish for other demographics. It's essential for AI developers to ensure their models are trained on diverse datasets. However, as the speaker mentions, this isn't always feasible due to the availability of data. Therefore, building a functionality where AI models can acknowledge their limitations and uncertainties is crucial.

In conclusion, the potential of AI in healthcare is immense. However, to harness this potential fully, we need to address the challenges head-on. This involves establishing new regulatory frameworks in collaboration with AI developers, healthcare practitioners, policy advisers, and patients. By doing so, we can ensure that AI serves the entire population equally, leading to a future where healthcare is more personalized, efficient, and effective.
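The functionality this commenter describes, a model that can acknowledge its own limitations, is often implemented as abstention: the system defers to a clinician whenever its top class probability falls below a threshold. A minimal sketch in Python; the labels, probabilities, and threshold are hypothetical, not from any real diagnostic model:

```python
# Minimal sketch: a predictor that abstains when its confidence is low.
# Labels, probabilities, and the 0.85 threshold are illustrative only.

def predict_with_abstention(probs, threshold=0.85):
    """Return the top label, or a deferral message if the model's
    highest class probability is below the threshold."""
    label, p = max(probs.items(), key=lambda kv: kv[1])
    if p < threshold:
        return "uncertain - refer to clinician"
    return label

# Confident case: one diagnosis dominates.
print(predict_with_abstention({"lung": 0.93, "breast": 0.04, "other": 0.03}))
# Ambiguous case: the model flags its own uncertainty instead of guessing.
print(predict_with_abstention({"lung": 0.48, "breast": 0.45, "other": 0.07}))
```

The design choice here is that an abstention is treated as a first-class output, so downstream workflows can route uncertain cases to a human rather than silently accepting a low-confidence guess.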

Hans Kraut - 03.06.2023 15:45

Long overdue. Use AI to predict medication choice and dosage (e.g. for ADHD) from past response.

With WhatsApp, the patient simply exports the chat log as a txt file and an LLM does the analysis; just use GPT-4 and so on (you don't even have to fine-tune, though it might help), as long as it learns and can be reused or be beneficial to future models.
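The pipeline this commenter sketches (export a chat as .txt, then hand it to an LLM) starts with parsing the export into structured entries. A minimal sketch, assuming a WhatsApp-style `[date, time] sender: message` line format; the format details and sample line are illustrative assumptions:

```python
import re

# Matches lines like "[01/06/2023, 09:15] Pat: took 10mg, focus better today".
# The exact export format varies by locale; this pattern is an assumption.
LINE = re.compile(r"^\[(\d{2}/\d{2}/\d{4}), [^\]]+\] [^:]+: (.*)$")

def parse_log(text):
    """Extract (date, message) pairs from a WhatsApp-style chat export,
    skipping any lines that do not match the expected format."""
    entries = []
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            entries.append((m.group(1), m.group(2)))
    return entries

sample = "[01/06/2023, 09:15] Pat: took 10mg, focus better today"
print(parse_log(sample))
```

Once parsed, the dated entries could be summarized or batched into prompts for a language model, which is the analysis step the comment alludes to.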

ZenSound Sarah - 13.05.2023 05:57

I'm writing a persuasive essay on the benefits of using AI in healthcare. This was very helpful for looking at the benefits and getting an idea of how useful AI can be in healthcare. It's not entirely taking the role of decision-making away from physicians; it's a great tool.

Wellnessmatters48 - 26.04.2023 08:27

Artificial intelligence has some amazing potential benefits in the healthcare field: efficiency improvements for hospitals, assistance and guidance for physicians in patient treatment regimens, and, perhaps greatest of all, the potential to diagnose a patient. Dr. Navid Saidy discussed some very important complications to consider when introducing artificial intelligence into medical practice, including regulations for medical devices, which typically assume a physical device. In the case of artificial intelligence, however, it is software that evolves and does not produce a static, repetitive outcome. If artificial intelligence's purpose is to diagnose and to give treatment options and a prognosis, its output has numerous possible outcomes, which can be hard to quantify and therefore hard to regulate. Even if, as stated in the video, the regulations change to allow for more transparency and real-time monitoring, there still are risks. One of the main concerns about artificial intelligence is that the data used to create its program is biased. Since humans are the ones collecting the data, and have interpreted it with implicit assumptions that are then incorporated into the system, this bias is transferred to the artificial intelligence models and can lead to biased results in diagnosis and treatment. This is why it is so vital, when assessing these new technologies, that the results are accurate. This leads me to an important topic to consider with the implementation of artificial intelligence: beneficence. Beneficence is the act of doing good by benefiting the patient more than doing harm. Artificial intelligence has great potential to reach a correct diagnosis more efficiently, which could greatly benefit patients in time-sensitive care. However, in cases where the wrong diagnosis is stated with confidence by artificial intelligence, this could lead to greater harm to the patient.
The capacity for AI to state when the answer is unknown, and whether more testing is needed, is crucial to the application of this technology. These drawbacks need to be critically supervised as artificial intelligence is incorporated into medicine. It is naïve to say that artificial intelligence won't be a part of medicine in the future. All the same, we need to be careful and diligent in assessing the technology and its outcomes for patients. It is important to remember that part of healing comes from a healing touch and the emotional and spiritual connectivity of humans. As technologies become more and more integrated into our society, we must prioritize and preserve our humanity.

Madison Bieganski - 26.04.2023 02:54

I do believe that the extent to which AI can help the healthcare system may be overstated.
I can understand the importance of using AI to consolidate data, having large amounts of information ready to go when referring to treatment options and the like. I can see how this saves time and resources and sometimes saves us from error. I can also understand the importance of wanting to be as efficient as possible in many situations in medicine. But how well does AI understand the risks and benefits for each patient? How well does AI truly follow beneficence for each individual patient? AI can't necessarily understand the emotional or mental toll certain treatments can have on a patient beyond the typically stated adverse reactions.
A major problem arises when the patient does not follow the standard of care, when the patient does not respond the way many others have to treatments, procedures, medicine, etc. Dr. Saidy states that AI can even learn from these patients who did not follow the treatment and can help come up with the following steps. But this is all still an algorithm backed up by some data. Do we know if that data is recent? Do we know if it follows a trend and is generalizable to other places? Do we know who collected this data? These are all questions we as healthcare professionals need to think about.
Think about a medication change: we could easily train a robot to know DDIs and which medications can be mixed with one another, but what happens when a patient has an allergic reaction to a new medication and needs to replace it with something else? Further yet, what if the medication used to replace the one that caused the allergic reaction would require two more medication changes if it did not work with all their existing meds? Here is where we may end up spending more money or time than we thought we saved with AI. And we could have resolved the allergy or reaction faster if a human doctor was around to supervise, or thought to order LFTs or genetic screening for patients with different metabolizing abilities. When we have to pick up the pieces AI left behind for lack of critical thinking, we are taking two steps back. We have to preserve beneficence, and all the thought processes and considerations that surround doing what is best for the patient.
I will say, however, there are great ways to use AI, and there should be more information on specific uses, such as locating the primary site of a cancer. I think there is a balance between allowing AI to take over an entire patient's care and allowing AI to aid us with information we cannot see or feel with human sight or touch. But when we consider places such as an ER, where decisions need to be made quickly, is there a potential for doctors to rely on this information too much since they need to think on their feet? Lastly, Dr. Saidy is aware of data bias and how that could skew the information depending on a patient's background. If we want to do what is best for the patient, these tools to ensure bias does not occur are extremely important, and manufacturers should consider perfecting them prior to using AI on patients and potentially having the AI misdiagnose. In the case of misdiagnosis in particular, AI could potentially break the code of non-maleficence. If a patient is misdiagnosed, chances are their treatment is incorrect for their diagnosis, in which case we could be causing harm to the patient without knowing it. This is where, again, AI needs to be used as a backup tool, not the lead tool.
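The medication-change scenario above can be made concrete. A "robot that knows DDIs" is, at minimum, an interaction table plus a check over the patient's med list, and the cases outside the table (allergies, unusual metabolizers) are exactly what it misses. A minimal Python sketch; the drug pairs and notes are illustrative, not clinical guidance:

```python
# Hypothetical interaction table keyed by unordered drug pairs.
# These entries are illustrative examples, not clinical guidance.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"ssri", "maoi"}): "serotonin syndrome risk",
}

def check_new_medication(current_meds, new_med):
    """Return any flagged interactions between a proposed drug and the
    patient's existing medication list. Anything not in the table
    (allergies, metabolizer differences) silently passes the check."""
    flags = []
    for med in current_meds:
        note = INTERACTIONS.get(frozenset({med, new_med}))
        if note:
            flags.append((med, new_med, note))
    return flags

print(check_new_medication(["warfarin", "metformin"], "aspirin"))
```

The point of the sketch is the commenter's: the table only flags what it already contains, so a patient who reacts in an untabulated way gets no warning at all, which is where human supervision comes in.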

Scarlett Kass - 25.04.2023 05:03

There is no doubt advances in technology have improved healthcare tremendously over the years, and AI is no different. AI has already been shown to improve healthcare through better patient outcomes, personalized medicine, and better access via its many tools. AI can aid healthcare providers in making highly informed decisions about patients' diagnoses and treatment options. In the example, cancer is complicated and different for each patient and each specific type of cancer. AI can use data from the patient and other similar patients to streamline resources and give the best possible predictions. The dark side to this, and many other technologies, is: where is the line in the sand? What are the rules and boundaries of this new technology? How do we prevent it from being used to harm patients instead of for its intended good? Who or what governing body is going to decide what is okay and what is not? Can the AI develop biases over time which would negatively impact care? Who is legally and clinically responsible for healthcare errors when it comes to misdiagnosis, subpar treatment, or even death? I think AI shows a lot of promise as a new tool, but I think there needs to be an organization in healthcare that assesses, as objectively as possible, the pros and cons, boundaries and limitations, and how it is most appropriately used in this setting. Through that lens and organization, AI can then be a great tool for physicians and other healthcare workers to do good by their patients: to provide creative problem-solving for their unique clinical situations.

LoveOneSV - 24.04.2023 06:06

While AI has the potential to revolutionize healthcare, there are also potential negative consequences to its use. One major concern with AI is that it may perpetuate or even amplify existing biases and discrimination in healthcare. This is because AI is only as unbiased as the data it is trained on and the way it is programmed. Since the humans who create these programs have their own implicit biases, the AI will replicate them, and maybe even amplify them with its efficiency. Another concern is that healthcare professionals and patients often are not educated enough to understand the complex algorithms used by the AI and how it arrives at its decisions. This lack of transparency can decrease trust in the technology. Furthermore, large amounts of data are required in order to develop reliable AI systems, much of it sensitive medical information. This raises concerns about privacy and security risks, particularly if breaches occur or unauthorized access to the data is obtained. While AI has enormous potential to improve healthcare, it is important to be aware of the potential negative consequences of its use.

Osteopathic Doc - 24.04.2023 04:39

This is very interesting and I am excited to see what this has to offer. There are so many pros to this type of technology and, as was mentioned, there are a lot of cons as well. It is so important that this stays highly regulated. One of the biggest issues that I could see arising out of this situation is the fact that AI technology is so new. New problems are found within the technology all the time, and we are discovering new things about it every single day. The reason this is an issue is the consequences. When ChatGPT makes a mistake, it generally does not mean the life or death of a human being, whereas with this technology things can very easily turn bad quickly. I feel as though there needs to be more time spent in the world of AI before we jump to using this in a real-life setting. As an example, I think it would be good to use this alongside a doctor for a minimum of five years: see exactly what the doctor recommends and then compare that to what the artificial intelligence was recommending. The success rate needs to be almost perfect in these types of scenarios. Another issue that I see is liability. If the artificial intelligence recommends certain treatments or diagnoses a patient, who is going to be liable when things go south? Is it going to be the doctor in charge, because he should have known better than what the AI was saying, or is it going to be the company that created the AI? Both would have strong arguments as to why it should be the other, and I feel as though this could leave the patient in a position where they cannot receive the compensation or seek the justice they need. Lastly, artificial intelligence is created by a company. For-profit companies are created to do just that: make a profit.
If there are companies competing to have their artificial intelligence working in certain hospitals, who is to say that there will not be shortcuts taken, or poor leadership that leads to disasters within the company and, in turn, disasters within the healthcare system? I feel as though a lot of the points I brought up are very critical to think through before this type of technology becomes the norm. I'm sure this has been discussed many times by others, but for the future of healthcare I do hope that it is in the right hands. While a lot of what I said was geared towards the negative, I really do hope that we can see this technology working flawlessly in the future, as I think it has great potential to do amazing things.

Asya Hussain - 24.04.2023 03:53

The speaker did an excellent job of describing how artificial intelligence could be a potential game changer for more efficient healthcare in our future, especially in circumstances where healthcare providers struggle to create a treatment plan due to the lack of a definitive diagnosis. He uses cancer-related problems to discuss how AI can help better determine what area should be treated with chemotherapy by acquiring blood samples, diagnostic imaging, and other tests, and uploading these components into a system that would then generate a proper diagnosis, treatment, and management plan. While I agree that technological advances have drastically changed the way we function in the healthcare field, and the advanced ways in which we are able to provide better healthcare to those in need, I think it is important to state that AI should only be used as an adjunct and never a replacement. Navid Saidy explains the limitations of using AI, including that, at the current state, there is a high chance of bias depending on how representative the pool of patients in the provided data set is. I believe the biggest concern with some hospitals having AI at the forefront while others do not is the issue of justice. Justice, in the context of ethical healthcare, is the principle which forces us to ask whether something is fair and balanced when it comes to the patient. If we look at an individual patient, I believe justice is taken care of. However, would AI at certain locations mean the stratification of goods and services provided by certain hospitals would shift even further toward more affluent areas? There is already a clear disparity in resources and quality of care depending on whether one is at an inner-city hospital or a privately owned corporation. Of course, these issues will be here whether there is AI or not, but is it just to further widen the gap in the quality of healthcare provided?
Will AI cause those who are in dire need but uninsured to suffer even more? Will AI create a larger monopoly in the healthcare world and make quality of care an even more "elite" privilege rather than a basic human right? These are thoughts that came to my mind, and I would love to hear any others. I do agree that we should look at how much good a system like this will bring before looking at the bad, but in today's post-pandemic world it is hard not to wonder how things could be negatively impacted, if at all.

Steven Williams - 23.04.2023 14:14

This is a very interesting topic, as I think artificial intelligence in the healthcare field can be very beneficial. I believe the use of artificial intelligence can aid in the diagnosis and treatment of medical diseases, but ultimately, at the end of the day, human decision-making pertaining to medical treatment and disease is supreme over an AI system. Beyond the impact of AI on employment and labor, there are many ethical considerations. One thing that fundamentally makes us human is the ability to adapt and to process emotion. These are factors that make humans unique, in that the human capacity for consciousness makes us apex animals. When dealing with human lives, especially life-and-death decisions, the concern for accountability and transparency raises the question of whether artificial intelligence can possess these traits. Physicians, in particular, act with empathy, compassion, dignity, and respect toward their patients. To hold a strong doctor-patient relationship, a doctor must listen and communicate with their patients. This is vital so that patients can trust doctors with their healthcare decisions and needs. Simply put, I don't believe you can ever teach a computer these traits. You cannot program a computer for the unique circumstances each patient possesses, and this can lead to misdiagnosis and medical treatment error, ultimately harming the patient. Also, the concept of human connection is vital to the quality of care toward patients. Simple things like human touch can never be achieved by an AI system. In conclusion, there are certainly many benefits to using artificial intelligence within the medical field; however, the potential for risk and harm leads me to argue that we can only use this type of technology under the supervision of humans. With proper education and awareness of the technology, there will be less error in the medical field, and our patients will receive the best possible care.

Matthew Zheng - 22.04.2023 20:46

Just like anything in healthcare that has ever been developed and adopted, AI needs to be tested in the field with live subjects. There may be casualties and collateral damage in this approach, but that's been the playbook of medical innovation with every medication and device. Informed consent is the key, and the adoption of this technology needs to be driven by clinicians with routine feedback, not regulators. Therefore, the full adoption of AI in healthcare will take decades and will be fraught with setbacks and perverse incentives. The road toward a regulated AI healthcare model is going to be very long. People don't care if AI writes a poem for them, but for diagnosis and management there are going to be trust issues, and it may require generational turnover for human acceptance. However, in a perfect world where nothing goes wrong, AI would be amazing for healthcare as far as accuracy and efficiency.

Shaun Schofield - 20.04.2023 05:14

AI in medicine has the potential to bring about significant benefits in terms of improved patient outcomes, more efficient diagnoses, and reduced healthcare costs. However, there is also a risk of harm if AI is not used ethically and with caution. One significant ethical concern is the potential for maleficence, or harm caused by the misuse or unintended consequences of AI.

For example, if an AI system is not properly trained or validated, it could make incorrect or biased decisions that harm patients. Additionally, if AI is relied upon too heavily, it could lead to dehumanization of healthcare, with patients reduced to mere data points and algorithms. It is therefore essential that those developing and implementing AI in medicine prioritize ethical considerations and take steps to ensure that the technology is used safely and responsibly. The potential benefits of AI in medicine are vast, but we must also be mindful of the potential risks and take steps to mitigate them.

Medical Student for Ethics - 19.04.2023 20:46

As we shift toward "personalized medicine," the use of AI in healthcare is inevitable. I really appreciate Dr. Saidy's comments on data bias, and as a medical student I wanted to know more. I am already aware of the biases found in current medicine but had not even considered the idea that our basic medical algorithms were biased. There is a great article written by Katherine J. Igoe explaining the biases seen in medical algorithms. In this article she explains that roughly 80% of our current genetic and genomic data comes from Caucasians, which makes our understanding of genetics geared toward Caucasians. Obviously, we cannot just ignore race when collecting genetic information, and in her article she suggests that the best way to combat the inevitable use of AI is having a diverse group of professionals rather than strictly a team of data scientists. This means a diverse professional team consisting of physicians, data scientists, government officials, philosophers, and everyday civilians.
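The dataset-skew problem cited here (most genomic data coming from a single ancestry group) is the kind of thing that can be audited before a model is ever trained, simply by tallying who is in the data. A minimal sketch; the group labels and counts are hypothetical:

```python
from collections import Counter

def representation(ancestries):
    """Fraction of the dataset contributed by each ancestry label."""
    counts = Counter(ancestries)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical sample mirroring the ~80% skew the commenter describes.
sample = ["european"] * 8 + ["east_asian"] + ["african"]
shares = representation(sample)
print(shares["european"])  # 0.8 in this hypothetical sample
```

An audit like this does not fix bias on its own, but it makes the skew visible so that a diverse review team has something concrete to act on.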

Dan Smith - 15.04.2023 05:24

Navid talks about the stability of artificial intelligence and the potential to improve care for patients. I agree that AI can be a game changer; it could improve diagnosis and care in a lot of ways. AI will be consistent; it won't miss things that a human will, because AI doesn't have a bad day, isn't affected by a patient load, and isn't worried about 40 patients at the same time. Still, I don't believe that AI will drastically improve healthcare; rather, I believe that it could be damaging. When discussing AI, there is always one thing left out: the human touch. Doctors care about their patients; they dedicate their lives to learning exactly how to help them, and if they don't know how, they've learned how to learn so that they can. While AI does learn and grow, it doesn't have a personal connection, a desire for anyone's well-being, or an emotional connection with anyone. This is what drives physicians; nobody goes into medicine for the money or for the job itself. Yes, the money can be good, but $400k of debt to pay off to become a doctor eats up so much of it. Most doctors don't have a typical nine-to-five job; they don't go home until all the patients have been seen, the charting has been done, and the staff has gone home. If an emergency comes up, they don't get to go home until it's taken care of. So why do doctors go into medicine? To help people. Every doctor is there because they genuinely care about the person they are seeing; this isn't something that AI can ever do. Care and passion go a long way: when you are passionate about something, there's nothing you won't do to achieve what you are after, and you won't stop working until you've accomplished what you set out to do. If a doctor can't figure out what's going on, he's going to dedicate all the time he has to figuring it out. That's why AI can never replace a healthcare worker.
AI doesn't know that a patient has a wife and kids, or grandkids they care for, or foster kids they have taken in, but a doctor does. Doctors live by the principle of beneficence, to do good, and that's something AI doesn't understand. Now, a doctor could utilize AI to help them reach a conclusion or find the answer to a question; there are ways to take advantage of technology while still preserving human care.

Ethics - 14.04.2023 05:06

Dr. Saidy's talk regarding artificial intelligence (AI) in healthcare made the argument that AI can provide many benefits, including making hospitals more efficient, and can improve access to care by providing accurate decision-making tools. Interestingly, AI can factor in the outcomes of thousands of other patients to determine what will work best for a patient based on their individual circumstances, by comparing them to other patients with similar circumstances. This could provide insight into how physicians determine what treatment or procedure may be best for their patients given each patient's specific circumstances. However, I argue that no two people and their specific circumstances are going to be identical and guarantee an identical outcome. AI could be used to make recommendations, but there could be circumstances that AI fails to factor into its algorithm, even if AI continues to evolve over time and get better at its predictions for healthcare outcomes. An ethical consideration when implementing AI in healthcare is that AI systems can have a significant impact on patient autonomy and decision-making. Patient autonomy would be undermined if AI systems were used to make decisions about diagnosis, treatment, or clinical outcomes without human input. I think it's important that AI systems are designed and implemented in a way that respects patient autonomy and preferences, so that, for example, the patient still gets to decide which treatment would work best when presented with all the options and the risks and benefits of each.

Also, if the AI algorithm is not up to date, or if there is an issue with the learning process of the AI system, it could lead to the patient receiving an incorrect diagnosis or incorrect treatment, which would not lead to improved healthcare outcomes. These unintended consequences or errors from relying on the AI system to guide diagnosis and treatment can put patient safety at risk and could cause harm to the patient. This relates to the ethical principle of nonmaleficence, which requires that healthcare providers do no harm to their patients. To comply with the principle of nonmaleficence, AI systems need to be designed and implemented in a way that minimizes the risk of harm to patients, and any potential harm must be carefully considered and weighed against the potential benefits of using the AI system. Dr. Saidy brought up a valid point that AI systems often don't use data sets that represent people of all different races. When AI predicts an outcome for someone of Asian descent from a predominantly white male data set, the prediction made by the AI system is likely to be less accurate. AI systems can therefore inadvertently perpetuate biases and discrimination against people if they are trained on biased or incomplete data. Overall, there are benefits to implementing AI, and there are also risks and challenges that need to be further investigated before AI is allowed to fully predict and guide outcomes.

MedschoolMom - 10.04.2023 19:10

Dr. Saidy did an excellent job of showing the huge impact artificial intelligence can have on healthcare. The ability to examine data collected from thousands of patients across the world would be an invaluable tool for physicians and has the potential to vastly improve care. The barriers Dr. Saidy described that are preventing AI from becoming a successfully used tool in healthcare raise an important question: is it ethical to resist the adoption of a tool that has such huge potential to improve care and save countless lives? By not moving quickly to change regulations and implement this game-changing new technology, are we basically killing people? Of course it takes time to change regulations and set up the right controls before we can fully start using AI. But if we as physicians and healthcare providers know something will save our patients' lives and we don't do everything we can to use it, I think we're violating our ethical obligation to our patients. We should do all we can to make these tools available as soon as possible. And despite all its benefits, I think we should be careful not to use AI as a crutch that replaces critical thinking and personalized care.

Aria Zar - 06.04.2023 00:45

Dr. Toosi Saidy introduced the topic of artificial intelligence very well, addressing how the general public might see it as "villain robots taking over the world," especially in a healthcare setting where personal information is more private and sensitive, and then went on to explain how AI can be a positive addition to the healthcare team. AI is such an important point to discuss in medical ethics, as it raises topics such as HIPAA and privacy laws. While artificial intelligence can be used as a base model to compare various patient data and formulate stronger, better-informed treatment plans for individual patients, and also helps to streamline medical triage in terms of testing and waiting, we do not discuss the hesitancy patients might have in trusting automated computer processes. The world has so quickly come to accept anonymous sources and online educational material; however, this might only be true for younger or more educated generations, individuals, and communities. How do we address those who express hesitancy and push back against AI mechanisms that clinics, hospitals, or individual healthcare professionals might be using? Although the learning algorithms are not yet in place for artificial intelligence software to be used regularly in clinical practice rather than in procedure-based settings, technology is rapidly evolving along with healthcare advances and knowledge, so I wonder how soon the next era of medical practice might be here, and also how accepting various patient populations might be of it.

Margie Mango - 04.04.2023 11:15

There are four pillars of medical ethics: beneficence (doing good), non-maleficence (doing no harm), autonomy (giving the patient the freedom to choose freely where they are able), and finally justice (ensuring fairness). This TED Talk is extremely compelling because it touches on the potential for early cancer diagnosis using AI software. The speaker addresses the safety of implementing these practices in a clinical setting. What stuck out to me after listening, however, was the pillar of beneficence. I feel this would have a great impact on healthcare beneficence and the effort of medical personnel to do good for their patients; my concern, though, is the nature of blending the physical human being with AI technology when it comes to diagnostic measures. If we are to implement this technology for early detection, do we have the correct treatments to offer our patients at that stage of their progression, especially if their symptoms are clinically undetectable at the time of early detection? Additionally, justice comes to mind. This has the potential to be a great asset and advocate for justice in medicine, or the potential to do the opposite. Who would have access to this type of technology? Who is to say that insurance companies would not gain this type of knowledge and hold it against their patients, especially patients in an economic situation in which they are unable to afford expensive treatment at the time? I do believe this model has great potential to help seek answers for patients and guide physicians toward diagnoses that are early and accurate. However, I feel the emerging regulations the speaker talks about are important components to consider before this technology is used regularly in practice. A specifically important topic touched on is the justice aspect with regard to the technology's ability to recognize different skin colors and ethnicities and their respective presentations.
Preventing biases within the technology is crucial to providing care that is inclusive of all patients. It is comforting to hear that this model is capable of learning and adapting, and that the speaker prioritizes making sure a diverse range of patients can be served equally by this technology moving forward.

Whopper - 04.04.2023 07:47

Shoutout to the dude in the front row with the pineapple shirt

Rick Ferns - 29.03.2023 21:36

We currently live in such an exciting time. Artificial intelligence, and the speed at which it is developing, has the potential to revolutionize medical care in a multitude of ways. From improving diagnosis and treatment to advancing research and development, AI is already changing the face of healthcare. As mentioned here, AI is set to have a significant impact on medical care through the development of biodegradable implants. Traditionally, medical implants have been made of materials that are not biodegradable and can potentially harm the patient if not removed or replaced. Biodegradable implants offer a safer alternative, as they are designed to gradually dissolve, reducing the risk of complications and adverse reactions. AI can assist in the development and design of these implants by analyzing data and predicting which materials and structures will be most effective inside the human body. Machine learning algorithms can help identify the best materials, shape, and size for implants, as well as predict how long they will take to biodegrade.
Another way in which AI is changing medical care is through predictive analytics. By analyzing large amounts of patient data, AI algorithms can predict which patients are most likely to develop certain diseases or conditions. This allows physicians to take proactive measures to prevent the onset of these conditions, potentially saving lives while also reducing healthcare costs. For example, AI can analyze data from patients' wearable devices and other sources to identify patterns that may indicate an oncoming heart attack. Physicians can then intervene before the heart attack occurs, potentially preventing serious damage to the patient's heart and improving outcomes.
AI can also improve diagnostic accuracy. By analyzing patient data and identifying patterns that may be missed by human physicians, AI algorithms can help diagnose diseases and conditions more accurately and quickly. This can be particularly beneficial in areas such as radiology, where AI can assist in identifying early-stage cancer or other conditions that are difficult to detect with traditional imaging methods.
However, as with any new technology, there are ethical considerations to take into account. One of the main concerns is privacy: patient data must be protected and kept confidential, and patients must have the ability to control how their data is used. Another is the potential for AI to exacerbate existing biases in the healthcare system. If AI algorithms are trained on biased data, they may produce biased results, leading to disparities in outcomes for different populations. It is therefore crucial that AI is trained on diverse and representative data so that it does not perpetuate existing inequalities. That could be detrimental. AI has the potential to significantly improve medical care, but as it continues to evolve, it is vital that we remain mindful of these considerations to ensure that it is used in a way that benefits patients and society as a whole.
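The bias concern raised above can be made concrete with a toy sketch (all numbers invented): a simple threshold "diagnostic model" calibrated on one demographic group loses accuracy on another group whose biomarker readings are distributed differently.

```python
# Toy sketch (all numbers invented): a threshold "diagnostic model" is
# calibrated on Group A only, then evaluated on Group B, whose biomarker
# readings have a lower baseline -- its accuracy drops to chance level.

def train_threshold(samples):
    """Set the decision threshold halfway between the two class means."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(samples, threshold):
    correct = sum(1 for x, y in samples if (x > threshold) == (y == 1))
    return correct / len(samples)

# (biomarker reading, has_disease) pairs for two demographic groups
group_a = [(0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1)]    # training demographic
group_b = [(0.1, 0), (0.15, 0), (0.35, 1), (0.4, 1)]  # lower baseline readings

t = train_threshold(group_a)
print(accuracy(group_a, t))  # 1.0 on the demographic it was trained on
print(accuracy(group_b, t))  # 0.5 on the unseen demographic -- a coin flip
```

The fix the comment argues for is exactly what the sketch suggests: calibrate on data that includes both groups, or the model silently underserves one of them.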

Jay Lee - 24.03.2023 21:55

Dr. Navid Toosi Saidy makes an interesting case for potentially implementing AI in medicine. While there are many potential benefits, as highlighted throughout his talk, from diagnosing and treating cancer to optimizing the use of genetic information, there are also many ethical aspects that should be taken into consideration. One issue that could arise falls under the ethical pillar of justice in medical ethics. If the data being used to create these AI algorithms is not all-encompassing, certain patient populations may be at risk of poorer health predictions simply because the AI algorithm is not built to take every aspect of each individual into consideration. This could lead to an unfair distribution of care, which would be quite problematic for the future of medicine. Artificial intelligence in medicine could also lead to healthcare providers overlooking and missing diagnoses in patients because they become too dependent upon the technology. That risk of losing human connection is something to always think about when talking about technological advances, especially when it comes to something as personal as someone's healthcare. I did find the aspect of emerging regulation to be quite encouraging and a promising way to allow AI to evolve with and adapt to the ever-changing medical field. I feel adaptability will be a huge part of what would make AI successful in medicine. I am interested to see how we can all work together to make artificial intelligence progress in a positive direction for the healthcare field.

Angel Cruz - 22.03.2023 21:42

Dr. Saidy made an interesting case for potentially using AI in the future of healthcare. He talks about the best of AI and some regulations that would need to be in place. Some ethical considerations came to mind with this use of AI. AI algorithms risk perpetuating biases against certain populations if the data used to train them isn't representative. I loved how Dr. Saidy was able to address this and how they are designing it to be fair and unbiased, which is amazing progress. Another thing I was wondering about was the transparency of AI systems. AI systems can be opaque, making it difficult for patients and healthcare providers to understand how decisions are made. It is important to ensure that AI systems are explainable so that patients and healthcare providers can understand how decisions are reached. I feel it is also important to ensure that there are clear lines of responsibility for the decisions made by AI systems, and that there are mechanisms in place to address errors and mistakes. Overall, the healthcare system can benefit greatly from AI systems, and being able to advance technology to help patients will be a great accomplishment. These are just some thoughts I had about things that need to be considered as we advance into artificial intelligence in healthcare.
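One simple route to the transparency this comment asks for is an inherently interpretable model: with a linear risk score, each input's contribution to the decision can be surfaced directly to the patient and clinician. A minimal sketch, with feature names and weights invented purely for illustration:

```python
# Hypothetical sketch of an explainable linear risk score: every input's
# contribution to the decision is computed and shown, largest first.
# The features and weights below are invented for illustration only.

weights = {"age": 0.02, "tumour_size_mm": 0.08, "smoker": 0.5}

def explain(patient):
    """Return the risk score plus each feature's share of it."""
    contributions = {name: weights[name] * patient[name] for name in weights}
    return sum(contributions.values()), contributions

score, parts = explain({"age": 60, "tumour_size_mm": 12, "smoker": 1})
print(round(score, 2))  # 2.66
for name, value in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:+.2f}")  # largest contributor printed first
```

Deep models are not this directly decomposable, which is why opacity is a live concern; but the principle of attributing a decision to its inputs is the same one post-hoc explanation methods aim for.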

Baljit Singh - 16.03.2023 20:23

Good news.. now we have AI to reduce physician burnout as well.

grayy - 13.03.2023 20:04

what a noob 💀💀☠☠🥸🥸

Morgan Malloy - 12.03.2023 23:54

I agree with Dr. Saidy that there are a lot of potential benefits to patients in using AI. The ability to gather and compare large amounts of data is a huge benefit. Creating a global database where labs, imaging, and even genomic tests could be stored and used to diagnose patients would be revolutionary. Dr. Saidy discussed the possibility of this AI being able to differentiate between races to provide more accurate diagnoses. There is no doubt that the use of this AI would save patients' lives by cutting diagnosis time down. However, it makes me think of the patients it won't help. As healthcare providers, it is essential to take a thorough patient history. Some research states that as many as 70% of patients can be diagnosed by the physician taking a comprehensive history. Some patients are not clear-cut, and their medical history plays a vital role in providing them with an accurate diagnosis. If these patients had their information uploaded into the AI, they could very easily be misdiagnosed, particularly if their illness closely mirrors another illness and the only defining feature is found in the patient's history or physical exam. This violates the ethical principle of non-maleficence, which means to do no harm. We would be harming our patients in this case by neglecting the personalized care that is so imperative in medical diagnosis. I can see this being useful in rural areas where specialists aren't readily available to provide the extensive workups that some patients need. But I don't think this should be used as a substitute for, or replacement of, seeing a physician. There are other medical ethics issues when it comes to using AI. One of them is safety and privacy. By uploading a patient's private healthcare information into a global database to compare it with others, you are ultimately sending out that patient's protected health information.
This would be a violation of HIPAA unless, of course, the patient signed a consent form allowing their information to be uploaded into the AI system. Another consideration is informed consent. For medical procedures, healthcare providers usually must obtain informed consent from patients. Would that be required before using AI to help diagnose a patient? I think it's something that needs to be taken into consideration.

Foxmed - 12.03.2023 13:03

Where did you give this presentation, sir?!

Shaylee Renner - 02.03.2023 19:39

Navid discussed how artificial intelligence (AI) can improve healthcare, and I completely agree. He pointed out the benefits of AI in healthcare, such as the ability to collect more data more quickly, the absence of unconscious bias, and more accurate diagnoses, all of which would help save lives. I assumed data would be entered into software from all over the world and be accessible within seconds. That is faster and more thorough than any physician or laboratory communication I have ever witnessed for diagnosing patients. One broad example would be COVID: could it have been isolated sooner if we had known the severity and consequences of the illness when it first began? The downside of this information being at everyone's fingertips is when a treatment is available in one country but not another. I imagine that would create an emotional challenge for the physician and patient if there is a known successful treatment they cannot access; however, I feel that could eventually be resolved. Another concern would be the many exceptions we see in medicine. The AI may only output the most common symptoms or treatments, which would cover most cases, but the small number of "abnormal" cases may not benefit from an AI program. It would be nearly perfect if the AI could give an "I don't know" answer, as Navid suggested.
Unlike human physicians, AI will never develop unconscious bias. Each patient's case will be input, the AI will compare the individual's information to a large data set, and that's it! It cannot take into consideration the "type" of patient, such as a helpless patient or a difficult patient. It cannot see a patient's past experiences, such as substance abuse. Both of these, I think, can influence a human physician's opinions and potentially the patient's treatment. That being said, sometimes knowing the patient's personality does positively influence the type of treatment that would be best, because a compliant patient is better than a patient who refuses treatment.
Our world is growing larger and more populated at a fast rate, and no human can keep up. Still, there is no way AI can take over completely: humans have higher-order thoughts and emotions that AI is far from possessing. As long as the AI can continuously gather data from around the world, adapt to the collected data, and remain under human oversight, I think AI would be a huge benefit to medicine.
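The "I don't know" behaviour suggested above has a simple mechanical form: abstain and refer whenever the model's confidence falls below a threshold. A toy sketch, with the probabilities and the 0.8 cut-off invented for illustration:

```python
# Sketch of an abstaining classifier: the model only issues a diagnosis
# when it is confident enough; otherwise it admits uncertainty and refers
# the case to a physician. Threshold and inputs are invented examples.

def predict_with_abstention(p_disease, threshold=0.8):
    """Diagnose only when confident; otherwise admit uncertainty."""
    confidence = max(p_disease, 1 - p_disease)  # distance from a coin flip
    if confidence < threshold:
        return "I don't know -- refer to a physician"
    return "disease" if p_disease >= 0.5 else "no disease"

print(predict_with_abstention(0.95))  # disease
print(predict_with_abstention(0.03))  # no disease
print(predict_with_abstention(0.55))  # I don't know -- refer to a physician
```

The open research problem is getting the confidence estimate itself to be trustworthy (calibration), not the abstention logic, which is as simple as shown.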

df - 01.03.2023 21:36

AI shows immense promise in the field of healthcare. At surface level, it could vastly improve health outcomes by ensuring that the most specific treatment is presented to the patient. It allows the provider to synthesize all pieces of information and be given a patient-specific plan that would, in theory, provide the best possible care for that patient. There are inherent issues with the implementation of AI in healthcare specifically, though. Diminished privacy, social determinants of health, lack of justice, and lack of non-maleficence are a few ways in which AI could be harmful to the healthcare population. First of all, any time computers and automated systems are in place, there is a concern that there will be a breach of privacy. With any system comes the threat of hackers obtaining and leaking the protected, personal information of patients. This is a large concern due to HIPAA. Patients expect privacy. They should never have to worry about a third-party system leaking their information. Not only are data leaks a concern, but if AI is constantly learning how to be smarter and better, it is presumably using patients' information to do so. Next, if AI is to be implemented in healthcare, it must be assured that all patients will have equal, equitable access to the services. There need to be specific processes for all races and backgrounds in order to best serve patients. If this is based solely on one subset of the population, it will introduce new healthcare barriers. We also need to ensure that it is affordable and covered by insurance in order to make it accessible. Lastly, we must ensure that no ethical principles are broken during the implementation of AI in healthcare. Justice is needed to ensure fairness to all who benefit from AI in healthcare. Non-maleficence means to do no harm.
This is concerning when dealing with AI, because medicine has been built on physicians who study endlessly in order to provide the best care for patients. While AI might conclude that a treatment is factually "best" for a patient, it is important to consider that no two patients are the same. We must take into consideration the entire mind, body, and spirit of a patient in order to best treat them. With all of this said, I believe that AI may have a significant future in medicine, but it is equally important to make sure that it is done the right way.

Qq - 28.02.2023 09:07

AI can save lives…
Isn't that how it always starts?? Before they take lives!?! 🙄

Julie Parker - 27.02.2023 07:37

What happened to asking the patients if they want this???
They should decide… not the doctors, not the hospitals, not the insurance companies or the government!!!
Patients make the decision!

Montanagal - 22.01.2023 01:28

Maybe they will make a robot nurse and solve the world's problems (I doubt compassion or empathy can ever be replicated), but does anyone care?

aaronldsouza - 16.01.2023 05:30

I appreciate the attempt at simplifying a complex field of computing science. What was missing is commentary on the ethics behind the use of emergent technologies such as AI. This matters because training an AI system requires both negative and positive outcomes to be fed back into the system. The subject of care must provide their informed consent AND understand the experimental nature of the technology. The regulatory frameworks in place exist to help ensure that negative outcomes are reduced (based on experience) AND that there is a person responsible for the decision making that led to the outcome. Who is that responsible person with an AI system? The clinician who blindly trusts the AI, or the engineer who coded the algorithms? As a healthcare professional and digital health leader, I suggest that until those aspects are established from a medico-legal, liability, education, and socio-technical point of view for clinicians, we should be rigorously challenging the message of simplicity put forward here so that AI can be a trustworthy tool.

Roberto Fabila - 15.09.2022 18:05

Could someone help me by explaining what "regulatory frameworks" means in medical terms? Or provide me an example?
Thanks :)

Anudeep.A.V - 06.08.2022 17:02

Artificial intelligence can really make medicines for immortality and make our earth a heaven for those who are chosen, and advance the latest technologies

Shruti Sketches - 31.07.2022 09:46

I am a newbie in med and don't know exactly where to start, but I am sure I am interested in creating something that helps people.

Shruti Sketches - 31.07.2022 09:41

A job model to advance AI software might exist

Mariyam Farooqui - 11.07.2022 18:38

What qualifications are needed?

K F - 02.06.2022 11:17

Sounds motivational for future generations of scientists!

Rafay Ansari - 07.05.2022 14:42

I am a 3rd-year MBBS student. I love computers & technology more than medicine, and my vision is to work on AI in healthcare, IoMT, and other fields of technological innovation in health care. I know the basics of Python and the basics of C, have worked with Arduino and Raspberry Pi, and have general knowledge of AI and ML. I'm really passionate and have potential, but I'm not clear about the path I should take. Please help!

carnby080808 - 05.05.2022 22:36

This video helped me get my degree.

TechnoKratZ - 10.03.2022 12:36

AI learns from new data. An AI dev knows how much new data is needed to fine-tune the existing model.
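The fine-tuning idea above can be sketched with toy numbers (all invented): a small batch of new labelled data nudges the deployed model's parameter rather than replacing it, which is the behaviour the talk's "locked versus learning" regulatory discussion is about.

```python
# Toy sketch (invented numbers): fine-tuning blends an existing model's
# decision threshold toward what a small batch of new data suggests,
# instead of retraining from scratch.

def fine_tune(old_threshold, new_samples, learning_weight=0.1):
    """Nudge the deployed threshold toward the midpoint of the new data."""
    pos = [x for x, y in new_samples if y == 1]
    neg = [x for x, y in new_samples if y == 0]
    new_estimate = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return (1 - learning_weight) * old_threshold + learning_weight * new_estimate

t = 0.5                                  # parameter of the deployed model
t = fine_tune(t, [(0.3, 1), (0.1, 0)])   # two new (reading, label) samples
print(round(t, 2))  # 0.47 -- nudged toward the new data, not replaced by it
```

The `learning_weight` plays the role the comment alludes to: how much new data (and how strongly weighted) is enough to update the model without destabilizing it.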

Seeker296 - 04.03.2022 17:16

The only jobs to survive the AI revolution will be those that have very small data pools available or require massive creativity and novelty (more so than art and music)

Surabhi Srivastava - 27.02.2022 19:48

What is the future of pharmacy professionals in this??

Dr.Devashish Srivastava - 05.12.2021 19:02

AI is the future of efficient care in the health sector
