



Corporate Minority | Alive

About IBID

IBID provides and implements solutions for purchasing processes in supply chain management. It is based in Florianópolis, Brazil.

Headquarters Location

Cia Primavera SC 401, km 4 Rodovia José Carlos Daux, 4150 Rooms 01 and 02



+55 (11) 4118 7100



Latest IBID News

Worldwide: Current Status Of AI Policy Developments In Canada And Abroad - Miller Thomson LLP

Nov 27, 2023

Ever since generative artificial intelligence ("AI") technologies were adopted on a large scale by both businesses and private users in 2023, regulators appear to have increased the speed and intensity of their efforts to propose regulations targeted at the development and use of AI. Certainly, the public eye has never been more focused on the topic than it is currently.

In this article, we provide a status update on these efforts, both in Canada and abroad. Clearly, there is profound and widespread agreement that AI requires specific regulation, but there is also significant divergence regarding how it should apply to, for example, foundational models (e.g. ChatGPT), what types of systems are "high impact," and whether certain applications should be prohibited altogether.

CANADA'S ARTIFICIAL INTELLIGENCE AND DATA ACT

Canada's proposed Artificial Intelligence and Data Act ("AIDA") was introduced as part of the Digital Charter Implementation Act, 2022 ("Bill C-27"), to provide guardrails for the responsible design, development, and deployment of AI systems in Canada.

Since its introduction, AIDA has been under significant scrutiny for a number of reasons, notably that 1) it purports to regulate "high impact systems" without actually defining what those systems would encompass, 2) it was drafted without the necessary public consultation, and 3) it does not cover uses of AI by government agencies or law enforcement.

In recent meetings of the Standing Committee on Industry and Technology (the "Committee"),1 it was also pointed out that the definition of "artificial intelligence system" in AIDA may not be perfectly aligned with definitions in other jurisdictions and contexts.
The below demonstrates the different definitions of "AI system" in Canada and the EU:

Canada's AIDA: "Artificial intelligence system" means a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.

EU AI Act: "Artificial intelligence system" (AI system) means software that is developed with one or more techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.

While the EU AI Act does not limit the definition in respect of degree of autonomy, AIDA applies only to technological systems which process data autonomously or partly autonomously. Given the cross-border development and use of AI systems, it is important that what is deemed "AI" in Canada is consistent with the US and the EU in particular. It was also noted that, because the regulation of AI spans many disciplines and actors in the economy, there should be an independent regulator of AI in Canada, as opposed to mere oversight by the Ministry.
On October 3, 2023, the Minister of Innovation, Science and Industry of Canada, François-Philippe Champagne, wrote to the Committee suggesting amendments to Bill C-27.2 The suggestions included the following amendments specifically pertaining to AIDA:

  • defining classes of systems that would be considered high impact, including seven classes of systems in respect of matters related to:
      ◦ determinations in respect of employment;
      ◦ determinations as to whether to provide an individual with services, the cost for those services and the prioritization of services;
      ◦ using biometric information in respect of identity authentication or determinations of one's behaviour or state of mind;
      ◦ moderation of, or presentation of, content to individuals;
      ◦ health care; and
      ◦ the exercise and performance of law enforcement powers, duties and functions;
  • specifying distinct obligations for generative general-purpose AI systems, like ChatGPT, such that, before placing the system on the market or putting it into service: impact assessments would be conducted; measures to mitigate the risk of bias would be put in place, which measures are to be tested to ensure effectiveness; plain-language descriptions of the capabilities and limitations of the system and of risk and mitigation measures are prepared; and compliance with regulations is to be ensured;
  • clearly differentiating roles and obligations for actors in the AI value chain, including in relation to the high-impact classes outlined above;
  • strengthening and clarifying the role of the proposed AI and Data Commissioner; and
  • aligning with the EU AI Act as well as other advanced economies by making changes to key definitions such as AI and enumerating further responsibilities and accountability frameworks on persons developing/marketing/managing large language models.3

From these proposed amendments, it is clear that more changes are required before AIDA is enacted.
CANADA'S VOLUNTARY AI CODE OF CONDUCT

As we have previously reported, the Canadian government recognized the need for some guidance and foundational principles in the interim period between widespread use of generative AI and the coming into force of AIDA. With that, the "Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems" (the "Code of Conduct") was published.4

The Code of Conduct sets voluntary commitments that industry stakeholders can implement to demonstrate responsible development and management of generative AI systems. It outlines six core principles:

  • Accountability: Organizations will implement a clear risk management framework proportionate to the scale and impact of their activities.
  • Safety: Organizations will perform impact assessments and take steps to mitigate risks to safety, including addressing malicious or inappropriate uses.
  • Fairness and equity: Organizations will assess and test systems for biases throughout the lifecycle.
  • Transparency: Organizations will publish information on systems and ensure that AI systems and AI-generated content can be identified.
  • Human oversight and monitoring: Organizations will ensure that systems are monitored and that incidents are reported and acted on.
  • Validity and robustness: Organizations will conduct testing to ensure that systems operate effectively and are appropriately secured against attacks.5

The Code of Conduct provides a temporary solution for the current lack of legislation governing AI. It remains to be seen how the Code of Conduct is perceived by key stakeholders and whether it receives substantial adoption.

UK'S AI SAFETY SUMMIT – "THE BLETCHLEY DECLARATION"

On November 1, 2023, a number of international governments, leading AI companies, civil society groups and AI researchers met at the AI Safety Summit to consider the risks of AI and discuss how such risks could be mitigated by the international community.
On the opening day of the summit, a declaration was signed by 28 countries (including Canada, China, the US and the UK) and the EU. The declaration, which is being referred to as the "Bletchley Declaration," establishes collaboration between these nations to take a common approach to AI and provides an agenda for addressing AI risk which consists of two action items:

  • identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies; and
  • building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.6

The Bletchley Declaration shows a commitment to tackling AI as an international challenge at the outset and provides a promising path towards an internationally harmonized approach to regulating AI.

EU AI ACT

In April 2021, the European Commission introduced the first regulatory framework for AI in the form of the EU AI Act. The proposal entails the assessment and categorization of AI systems across various uses, based on the potential risks they present to users. The varying levels of risk will determine the extent of the regulatory measures imposed.

The EU AI Act would classify AI systems by risk and mandate various development and use requirements. Its focus is on strengthening rules around data quality, transparency and accountability. Initially, it appeared that the EU would be the first jurisdiction to govern AI.
However, in June of this year, amendments to the draft AI Act7 raised concerns. In particular, the June changes included a ban on the use of AI for biometric surveillance and, more controversially, a requirement for generative AI systems (like ChatGPT) to disclose AI-generated content.8 As a result, the EU's AI Act negotiations are in a deadlock as large EU countries, including France, Germany and Italy, seek to retract the proposed approach for regulating foundation models in AI.9 Foundation models, such as OpenAI's GPT-4, have become a focal point in the late stages of the legislative process. A tiered approach for regulating these models, with stricter rules for more powerful ones, was initially considered but is now contested by some large European nations. Opposition is driven by concerns from companies like Mistral in France and Aleph Alpha in Germany, which fear potential disadvantages against US and Chinese competitors.10

The next meeting is set for December 6, 2023, which is a significant deadline given the upcoming European elections.11 If a resolution is not reached, the entire AI Act is potentially at risk.

US EXECUTIVE ORDER

On October 30, 2023, the White House enacted an Executive Order ("EO") focusing on the safe and responsible use of AI in the United States.12 The EO mandates that key executive departments develop standards, practices, and potential regulations within three to twelve months, covering the entire AI lifecycle. While immediate regulatory changes are limited, the order urges federal regulators to utilize existing authority to assess AI system security, prevent discrimination, address employment issues, counteract foreign threats, and alleviate talent shortages. The federal government has committed resources and authority to ensure the ethical use of AI in various sectors, with anticipated developments in guidelines and rules over the next year likely leading to significant new requirements.
Building on the voluntary commitments ushered in by the US government in July,13 the EO moves the US closer to comprehensive AI legislation. In particular, unlike prior efforts to govern AI, the EO creates tangible obligations for both governmental bodies and technology companies rather than simply providing general principles and guidelines. For example, it requires that developers of AI systems share safety test results with the government.

Footnotes

1. Minutes of Proceedings dated November 7, 2023, Standing Committee on Industry and Technology.
2. Office of the Minister of Innovation, Science and Industry, letter from the Honourable François-Philippe Champagne to Mr. Joël Lightbound, online (pdf):
3. Ibid.
4. Government of Canada, "Minister Champagne launches voluntary code of conduct relating to advanced generative AI systems," News Release (September 27, 2023) online:
5. Ibid.
6. UK Government, "Policy Paper: The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023," November 1, 2023, online:
7. European Parliament, Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament, June 14, 2023, online: ; see also: European Parliament, "EU AI Act: first regulation on artificial intelligence," News Release (June 14, 2023) online:
8. European Parliament, "EU AI Act: first regulation on artificial intelligence," News Release (June 14, 2023) online:
9. See Open Letter to the Representatives of the European Commission, the European Council and the European Parliament, online (pdf):
10. Luca Bertuzzi, "EU's AI Act negotiations hit the brakes over foundation models," Euractiv, November 10, 2023, online:
11. Jillian Deutsch, "The EU's AI Act Negotiations Are Under Severe Strain," Bloomberg, November 16, 2023, online:
12. The White House, "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," News Release (October 30, 2023) online:
13. The White House, "FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI," News Release (July 21, 2023) online:

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

IBID Frequently Asked Questions (FAQ)

  • When was IBID founded?

    IBID was founded in 2006.

  • Where is IBID's headquarters?

    IBID's headquarters is located at Cia Primavera SC 401, km 4 Rodovia José Carlos Daux, 4150, Florianópolis.

  • What is IBID's latest funding round?

    IBID's latest funding round is Corporate Minority.

  • Who are the investors of IBID?

    Investors of IBID include Hurst Capital.


