**TL;DR:** I'm trying to benchmark Infer to address performance regressions across multiple OCaml versions (https://github.com/ocaml/ocaml/issues/14047) and would appreciate some help in successfully building Infer on OCaml 5.4.0 and above. Some preliminary improvements are noted below as motivation.

### Steps to reproduce

1. Build the Infer release v1.2.0 from source (targets OCaml 4.14.x).
2. Build Infer on OCaml 5.3.0.
3. Build Infer on OCaml 5.4.0 using a custom switch.
4. Use an OpenSSL release for analysis (openssl 1.0.2d or 1.1.1g), as mentioned in the original issue.

The instructions in the previously reported issue (see https://github.com/ocaml/ocaml/issues/14047) mostly work, with some minor adjustments for stricter package versioning (edit the lockfile) and missing packages (use the opam repository archive). I've noted down some helpful tips in case they are useful: https://gist.github.com/curche/d88e1e317507d392877295815989a537

### Expected behavior

Infer builds successfully on all three versions.

### Actual behavior

- Large observable performance regressions between the first two versions, as noted in the issue above, which I'm hoping to help investigate.
- Unable to build Infer on OCaml 5.4.x (future official releases of which should add runtime improvements that might be interesting for performance and memory reasons).

### Other details

Release notes for OCaml 5.4.0: https://ocaml.org/releases/5.4.0

### Runtime results

Sidenote: I wasn't sure whether each `infer analyze` needs to be preceded by an `infer capture` step, or whether the `infer-out` directory that `infer capture` produces can be reused.
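To collect the average analyze times, a loop along these lines could be used. This is a minimal sketch, not the actual harness: the command string is a placeholder, and it assumes a single `infer capture` run has already produced `infer-out` that repeated `infer analyze` runs can reuse.

```ocaml
(* Minimal timing-harness sketch: run a shell command n times and average
   the wall-clock time. The command is a placeholder; in the actual
   benchmark it would be something like "infer analyze". Requires the
   unix library for Unix.gettimeofday. *)
let time_runs n cmd =
  let one () =
    let t0 = Unix.gettimeofday () in
    ignore (Sys.command cmd);
    Unix.gettimeofday () -. t0
  in
  let times = List.init n (fun _ -> one ()) in
  List.fold_left ( +. ) 0. times /. float_of_int n

let () =
  Printf.printf "avg: %.3fs\n" (time_runs 3 "true")
```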
Assuming that one capture can serve multiple analyze steps, I collected some average execution times.

The v1.2.0 release build of Infer targets OCaml 4.14.0 and has the following results:

On 5.3.0 we have:

One avenue for addressing runtime performance is improving the compaction algorithm in the GC (disclaimer: this work is not by me but by other collaborators I've been in touch with; I'm just collecting benchmark results to note down any improvements/changes). While initially targeting 5.4.0 (or the trunk branch of ocaml/ocaml), there's a backport available for 5.3.0, which is where I'm currently able to build Infer.

Running Infer built with that modified compiler, we have:

From ~139s to ~126s, around a 9% difference (although we lose some in `--no-multicore`). Note that this is with default runtime parameters. For more rigorous tests, we'll at least need to look at different test cases (e.g. openssl 1.0.2d and 1.1.1g), on different machines, with varying numbers of jobs and heap sizes. However, this looks promising enough to keep trying to get Infer building on OCaml 5.4.x (also see the table below).

Now, using runtime_events_tools (aka olly), one can get a trace output which can be viewed in Perfetto (this uses the Fuchsia trace format).

Based on the trace results, compaction runs go down from >2s on average to <1.4s. However, the total number of compactions that happen during analyze stayed much the same (~111). There seems to be a large number of compactions in general. One possibility is the number of manual compactions being triggered. However, looking at Infer's DomainPool and ProcessPool across versions, the manual compaction was present before as well, so what is happening here needs further investigation.
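As a small cross-check on the compaction counts above that doesn't require olly, the runtime's own cumulative counters in `Gc.stat` can be sampled around a workload. This is a sketch, not Infer code:

```ocaml
(* Sample the cumulative compaction counter from Gc.stat around a
   workload. The counters are cumulative since program start, so the
   difference gives the number of compactions during the workload. *)
let compactions_during f =
  let before = (Gc.stat ()).Gc.compactions in
  let result = f () in
  let after = (Gc.stat ()).Gc.compactions in
  (result, after - before)

let () =
  (* A manual compaction stands in for a real workload here. *)
  let (), n = compactions_during (fun () -> Gc.compact ()) in
  Printf.printf "compactions: %d\n" n
```

Note that what `Gc.compact` actually does varies across the 5.x releases (compaction was only reintroduced for the multicore runtime in later 5.x versions), which is exactly why the counter is worth sampling per version.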
Just for completeness, I tried commenting out the manual compaction calls in DomainPool & ProcessPool and noticed some difference; furthermore, basing that on top of the new compactor changes, we have:

Aside: the manual compaction mentioned above adds corresponding Domain.loop and waiting periods which can flood the ring buffer, and can potentially help explain the related lost events and crashes in the observability tool (previously reported in https://github.com/tarides/runtime_events_tools/issues/63). Patching it out does in fact result in fewer/zero lost events when running olly, and leads to more reliable gc-stats results. This is still a valid reason to improve olly for better, faster ring buffer reads.

To summarize, here's a table of the different runtime values (one configuration includes the compactor changes, the other additionally comments out the manual compaction):

### Notable issues in getting Infer building on 5.4.x

One option is to stick to 7d504cc505 and patch it to work with updated dependencies. Alternatively, getting the latest HEAD commit building on 5.4.x is preferable so that recent changes can be accounted for. Issues I've run into:

- Certain dependencies have updates with breaking changes (in my testing I observed errors involving ppxlib and containers, to name a few).
- When building from the HEAD/main branch or recent commits, building the clang plugin fails. It looks like there are differing LLVM versions in the prepare-clang.sh script and the custom opam package (there was a recent change from LLVM 20 to LLVM 19).

Since Infer has been useful for observing noticeable performance changes, I think it'd be a great project for continuing to improve the OCaml runtime for 5.4.x as we try out case studies based on OCaml software in production (and, in turn, help bring actual multicore advantages to Infer). If there are any benchmark suites you can point me to, or brief examples that cover different aspects of Infer with similar performance and memory bottlenecks, please feel free to add them to this issue or comment on my GitHub gist above.
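Rather than patching the calls out by hand for each build, the with/without comparison could be made repeatable by putting the manual compaction behind a flag. A hypothetical sketch follows; these names are illustrative and are not Infer's actual DomainPool/ProcessPool code:

```ocaml
(* Hypothetical: guard a worker pool's periodic manual compaction behind
   a flag, so both benchmark configurations can run from a single build.
   Illustrative names only; not Infer's actual code. *)
let manual_compaction = ref true

let after_batch () =
  (* This is the call the experiment above comments out. *)
  if !manual_compaction then Gc.compact ()

let run_batch work =
  let r = work () in
  after_batch ();
  r

let () =
  manual_compaction := false;
  Printf.printf "sum: %d\n" (run_batch (fun () -> 1 + 2))
(* prints "sum: 3" *)
```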
This will greatly help cover more aspects of both the runtime and Infer.
This PR introduces an initial translation of a small subset of Rust programs. We moved from creating our own library for translating Rust programs to using the charon library. Charon can digest Rust programs and crates with their dependencies and turn them into a uniform, usable shape stored in a JSON file, which can be read by charon's OCaml bindings.

The main changes/additions in this PR are as follows:

- Changed from loading in Textual files via our external library to reading and translating the JSON file produced by charon.
- The main addition, where the actual translation from Rust to Textual happens.
- The test folder, structured as follows:
  - A folder containing the test files, where each test has three files:
    - The Rust program
    - The intermediate representation of the Rust program in charon's JSON format
    - The intermediate representation of the Rust program in human-readable format
  - The expected output of the tests

The next steps are:

1. Implement the following Rust types:
   - **Non-empty tuples:** Empty tuples are treated as the unit type and are already handled. The next step is to implement non-empty tuples.
   - **Arrays:** Do not hard-code the array type as int; instead, use the type information from `builtin_ty` provided by Charon.
   - **ADTs:** ADTs other than tuples and arrays must also be implemented to support structs.
   - **Aggregates:** Empty aggregates are already handled, but non-empty aggregates must be supported as well.
   - **String and Box:** Heap-allocated types such as strings and boxes must be supported.
2. Model some of Rust's standard library in Textual.
3. Expand the tests, and add end-to-end tests.

For more details and outputs from sample programs, see the report: Infer_Rust_Frontend_Implementation___Testing.pdf

For an overview of the translation rules from Rust's intermediate representation, you can refer to the following report: translation rules.pdf

Would love to hear your feedback!
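To make the "next steps" above concrete, here is an illustrative OCaml ADT covering the kinds of Rust types listed, with a toy function dispatching on them. This is only a sketch for discussion; it is neither Infer's Textual IR nor Charon's actual type representation:

```ocaml
(* Illustrative only: a minimal ADT mirroring the Rust type kinds named
   in the next steps, and a traversal dispatching on them. Not Infer's
   Textual IR and not Charon's API. *)
type rust_ty =
  | Unit                        (* empty tuple = unit, already handled *)
  | Tuple of rust_ty list       (* non-empty tuples: next step *)
  | Array of rust_ty            (* element type from builtin_ty, not hard-coded int *)
  | Adt of string * rust_ty list  (* other ADTs, needed for structs *)
  | Boxed of rust_ty            (* heap-allocated Box<T> *)
  | Str                         (* heap-allocated string *)

let rec describe = function
  | Unit -> "unit"
  | Tuple ts -> "(" ^ String.concat ", " (List.map describe ts) ^ ")"
  | Array t -> "[" ^ describe t ^ "]"
  | Adt (name, _) -> name
  | Boxed t -> "Box<" ^ describe t ^ ">"
  | Str -> "String"

let () =
  print_endline (describe (Tuple [Unit; Array Unit]))
(* prints "(unit, [unit])" *)
```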
Adds support for boxes for primitive types. Adds a new builtin function `boxnew` with a model that allocates data on the heap and returns a pointer. Added support for Rust builtins by adding a default return type and `default_type` for Rust.

Since Rust's type system is quite elaborate, with heap-allocated types, support for passing charon arguments is added to remove some of the boilerplate with arguments like `--hide-marker-trait` and `--hide-allocator`. Additionally, we now emit a warning if we encounter type definitions or functions that cannot be translated, instead of dying, so that it is still possible to analyze the functions we do support.
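The warn-instead-of-die behavior described above can be sketched as a small combinator: attempt a translation and fall back to a warning on failure, so the remaining items can still be analyzed. The `translate` and `warn` parameters are placeholders; this mirrors the described behavior, not Infer's actual implementation:

```ocaml
(* Sketch of "warn instead of dying": try to translate an item; on
   failure, report a warning and return None so analysis can continue
   with the items that do translate. Names are illustrative. *)
let try_translate ~warn translate item =
  match translate item with
  | v -> Some v
  | exception e ->
      warn (Printexc.to_string e);
      None

let () =
  let warnings = ref [] in
  let warn msg = warnings := msg :: !warnings in
  let translate = function
    | "supported" -> "ok"
    | s -> failwith ("cannot translate " ^ s)
  in
  assert (try_translate ~warn translate "supported" = Some "ok");
  assert (try_translate ~warn translate "weird" = None);
  Printf.printf "warnings: %d\n" (List.length !warnings)
(* prints "warnings: 1" *)
```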