All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I have a question about customising my time picker. I'd like to display two panels, one for 24 hours and one for 1 month, with panel 1 displayed when the selected time range is 24h, and the second panel displayed when the time picker is set to the current month.

I tried this, but it doesn't work:

<form version="1.1" theme="light">
  <label>dev_vwt_dashboards_uc47</label>
  <init>
    <set token="time_range">-24h@h</set>
    <set token="date_connection">*</set>
    <set token="time_connection">*</set>
    <set token="IPAddress">*</set>
    <set token="User">*</set>
    <set token="AccessValidation">*</set>
  </init>
  <!--fieldset autoRun="false" submitButton="true">
    <input type="time" token="field1" searchWhenChanged="true">
      <label>Period</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset-->
  <fieldset autoRun="false" submitButton="true">
    <input type="dropdown" token="time_range" searchWhenChanged="true">
      <label>Select Time Range</label>
      <choice value="-24h@h">Last 24 hours</choice>
      <!--choice value="@mon">Since Beginning of Month</choice-->
      <default>Last 24 hours</default>
      <!--change>
        <condition value="-24h@h">
          <set token="tokShowPanel1">true</set>
          <unset token="tokShowPanel2"></unset>
        </condition>
        <condition value="@mon">
          <unset token="tokShowPanel1"></unset>
          <set token="tokShowPanel2">true</set>
        </condition>
      </change-->
    </input>
  </fieldset>
  <row>
    <panel>
      <input type="text" token="date_connection" searchWhenChanged="true">
        <label>date_connection</label>
        <default>*</default>
        <prefix>date_connection="</prefix>
        <suffix>"</suffix>
        <initialValue>*</initialValue>
      </input>
      <input type="text" token="time_connection" searchWhenChanged="true">
        <label>time_connection</label>
        <default>*</default>
        <prefix>time_connection="</prefix>
        <suffix>"</suffix>
        <initialValue>*</initialValue>
      </input>
      <input type="text" token="IPAddress" searchWhenChanged="true">
        <label>IPAddress</label>
        <default>*</default>
        <prefix>IPAddress="</prefix>
        <suffix>"</suffix>
        <initialValue>*</initialValue>
      </input>
      <input type="text" token="User" searchWhenChanged="true">
        <label>User</label>
        <default>*</default>
        <prefix>User="</prefix>
        <suffix>"</suffix>
        <initialValue>*</initialValue>
      </input>
      <input type="dropdown" token="AccessValidation" searchWhenChanged="true">
        <label>AccessValidation</label>
        <default>*</default>
        <prefix>AccessValidation="</prefix>
        <suffix>"</suffix>
        <initialValue>*</initialValue>
        <choice value="*">All</choice>
        <choice value="failure">failure</choice>
        <choice value="success">success</choice>
        <choice value="denied">denied</choice>
      </input>
    </panel>
  </row>
  <row>
    <panel id="AD_Users_Authentication_last_24_hours" depends="$tokShowPanel1$">
      <title>AD Users Authentication</title>
      <table>
        <search>
          <query>|loadjob savedsearch="anissa.bannak.ext@abc.com:search:dev_vwt_saved_search_uc47_AD_Authentication_Result" |rename UserAccountName as "User" |search $date_connection$ $time_connection$ $IPAddress$ $User$ $AccessValidation$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="count">100</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="Last Connection Status">
          <colorPalette type="map">{"failure":#D94E17,"success":#55C169}</colorPalette>
        </format>
        <format type="color" field="Access Validation">
          <colorPalette type="map">{"success":#55C169,"failure":#D94E17}</colorPalette>
        </format>
        <format type="number" field="AuthenticationResult"></format>
        <format type="color" field="AuthenticationResult">
          <colorPalette type="map">{"failure":#D94E17,"success":#55C169}</colorPalette>
        </format>
        <format type="color" field="Access_Validation">
          <colorPalette type="map">{"success":#55C169,"failure":#D41F1F}</colorPalette>
        </format>
        <format type="color" field="AccessValidation">
          <colorPalette type="map">{"success":#118832,"failure":#D41F1F}</colorPalette>
        </format>
        <format type="color" field="last_connection_status">
          <colorPalette type="map">{"success":#55C169,"failure":#D94E17}</colorPalette>
        </format>
      </table>
    </panel>
  </row>
  <row>
    <panel id="AD_Users_Authentication_1_month" depends="$tokShowPanel2$">
      <title>AD Users Authentication</title>
      <table>
        <search>
          <query>|loadjob savedsearch="anissa.bannak.ext@abc.com:search:dev_vwt_saved_search_uc47_AD_Authentication_Result" |rename UserAccountName as "User" |search $date_connection$ $time_connection$ $IPAddress$ $User$ $AccessValidation$</query>
          <earliest>$time_range.earliest$</earliest>
          <latest>$time_range.latest$</latest>
        </search>
        <option name="count">100</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="Last Connection Status">
          <colorPalette type="map">{"failure":#D94E17,"success":#55C169}</colorPalette>
        </format>
        <format type="color" field="Access Validation">
          <colorPalette type="map">{"success":#55C169,"failure":#D94E17}</colorPalette>
        </format>
        <format type="number" field="AuthenticationResult"></format>
        <format type="color" field="AuthenticationResult">
          <colorPalette type="map">{"failure":#D94E17,"success":#55C169}</colorPalette>
        </format>
        <format type="color" field="Access_Validation">
          <colorPalette type="map">{"success":#55C169,"failure":#D41F1F}</colorPalette>
        </format>
        <format type="color" field="AccessValidation">
          <colorPalette type="map">{"success":#118832,"failure":#D41F1F}</colorPalette>
        </format>
        <format type="color" field="last_connection_status">
          <colorPalette type="map">{"success":#55C169,"failure":#D94E17}</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</form>
Hi, I am trying to tie together multiple events describing a single transaction. This is my test example:

Event

Oct 21 08:19:42 host.company.com 2024-10-21T13:19:42.391606+00:00 host sendmail[8920]: 49L2pZMi015103: to=recipient@company.com, delay=00:00:01, xdelay=00:00:01, mailer=esmtp, tls_verify=NONE, tls_version=NONE, cipher=NONE, pri=261675, relay=host.company.com. [X.X.X.X], dsn=2.6.0, stat=Sent (105f7c9d-76a2-a595-e329-617f87ba2602@company.com [InternalId=19267223300036, Hostname=HOSTNAME.company.com] 145203 bytes in 0.663, 213.865 KB/sec Queued mail for delivery)

Oct 21 08:19:41 host.company.com 2024-10-21T13:19:41.715034+00:00 host filter_instance1[31332]: rprt s=42cu1tbqet m=1 x=42cu1tbqet-1 mod=mail cmd=msg module= rule= action=continue attachments=4 rcpts=1 routes=allow_relay,default_inbound,internalnet size=143489 guid=jb9XbZ5Gez432DgKTDz22jNgntXrF6xb hdr_mid=105f7c9d-76a2-a595-e329-617f87ba2602@company.com qid=49L2pZMi015103 hops-ip=Y.Y.Y.Y subject="Your Weekly  Insights" duration=0.095 elapsed=0.353

Oct 21 08:19:41 host.company.com 2024-10-21T13:19:41.714759+00:00 usdfwppserai1 filter_instance1[31332]: rprt s=42cu1tbqet m=1 x=42cu1tbqet-1 cmd=send profile=mail qid=49L2pZMi015103 rcpts=recipient@company.com

Oct 21 08:19:41 host.company.com 2024-10-21T13:19:41.675365+00:00 host sendmail[15103]: 49L2pZMi015103: from=sender@company.com, size=141675, class=0, nrcpts=1, msgid=105f7c9d-76a2-a595-e329-617f87ba2602@company.com, proto=ESMTP, daemon=MTA, tls_verify=NONE, tls_version=NONE, cipher=NONE, auth=NONE, relay=host.company.com [Z.Z.Z.Z]

I can extract the message id (105f7c9d-76a2-a595-e329-617f87ba2602@company.com) and qid (49L2pZMi015103) from the topmost message and tie it this way to the bottom one, but that is only two events out of a series of four. How would I generate a complete view of all four events? I am looking to get the sender and recipient SMTP addresses, subject, and message sizes from the top and bottom events. Any help would be greatly appreciated.
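Since all four events share qid=49L2pZMi015103, one way to build the complete view is to extract qid from every event and aggregate with stats, which avoids the transaction command's memory limits. A minimal sketch, where the index name and the inline rex extractions are assumptions to adapt to your environment:

index=mail ("sendmail[" OR "filter_instance")
| rex "sendmail\[\d+\]: (?<qid_sm>[A-Za-z0-9]+):"
| rex "qid=(?<qid_f>[A-Za-z0-9]+)"
| eval qid=coalesce(qid_sm, qid_f)
| rex "from=(?<sender>[^,\s]+)"
| rex "to=(?<recipient>[^,\s]+)"
| rex "rcpts=(?<rcpt_list>\S+)"
| rex "subject=\"(?<subject>[^\"]*)\""
| rex "size=(?<msg_size>\d+)"
| stats values(sender) as sender values(recipient) as recipient values(rcpt_list) as rcpts values(subject) as subject values(msg_size) as sizes by qid

transaction qid would also group the events, but stats scales better, and values() collects the per-event fields (sender, recipient, subject, sizes) onto one row per message.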
I recently installed a Splunk Edge Processor and I noticed it's not listening on port 9997. I can see it as a node on the Splunk Cloud Platform, but I can't send on-prem data from my universal forwarders to it because it's not listening on port 9997.

When I check the ports it's currently listening on, here are the results:

ss -tunlp
Netid State  Recv-Q Send-Q Local Address:Port   Peer Address:Port Process
udp   UNCONN 0      0      0.0.0.0:44628        0.0.0.0:*
udp   UNCONN 0      0      0.0.0.0:161          0.0.0.0:*
udp   UNCONN 0      0      127.0.0.1:323        0.0.0.0:*
tcp   LISTEN 0      2048   127.0.0.1:37139      0.0.0.0:*         users:(("edge_linux_amd6",pid=28942,fd=7))
tcp   LISTEN 0      128    0.0.0.0:22           0.0.0.0:*
tcp   LISTEN 0      2048   127.0.0.1:8888       0.0.0.0:*         users:(("edge_linux_amd6",pid=28942,fd=8))
tcp   LISTEN 0      128    0.0.0.0:8089         0.0.0.0:*         users:(("splunkd",pid=983,fd=4))
tcp   LISTEN 0      100    127.0.0.1:25         0.0.0.0:*
tcp   LISTEN 0      128    127.0.0.1:44001      0.0.0.0:*
tcp   LISTEN 0      2048   127.0.0.1:43335      0.0.0.0:*         users:(("edge_linux_amd6",pid=28942,fd=3))
tcp   LISTEN 0      128    127.0.0.1:199        0.0.0.0:*
tcp   LISTEN 0      2048   127.0.0.1:1777       0.0.0.0:*         users:(("edge_linux_amd6",pid=28942,fd=11))
tcp   LISTEN 0      2048   192.168.66.120:10001 0.0.0.0:*
tcp   LISTEN 0      2048   127.0.0.1:10001      0.0.0.0:*

As you can see, 9997 is not in there. I confirmed the shared settings for this node to make sure it's expected to receive data on that port:

Splunk forwarders
The Edge Processor settings for receiving data from universal or heavy forwarders.
Port: 9997
Maximum channels: 300 (the number of channels that all Edge Processors can use to receive data from Splunk forwarders)

Any clues as to why this is happening?
Hi everyone, I'm working on a Splunk query to analyze API request metrics, and I want to avoid using a join because it is making my query slow. The main challenge is that I need to aggregate multiple metrics (min, max, avg, and percentiles) and pivot HTTP status codes (S) into columns, but the current approach with xyseries drops the additional values: Min, Max, Avg, P95, P98, P99. The reason for using xyseries is that it generates columns dynamically, so the result contains only the statuses actually present in the data, with a count for each. Here's the original working query with join:

index=sample_index sourcetype=kube:container:sample_container
| fields U, S, D
| where isnotnull(U) and isnotnull(S) and isnotnull(D)
| rex field=U "(?P<ApiName>[^/]+)(?=\/[0-9a-fA-F\-]+$|$)"
| stats count as TotalReq by ApiName, S
| xyseries ApiName S, TotalReq
| addtotals labelfield=ApiName col=t label="ColumnTotals" fieldname="TotalReq"
| join type=left ApiName
    [ search index=sample_index sourcetype=kube:container:sample_container
      | fields U, S, D
      | where isnotnull(U) and isnotnull(S) and isnotnull(D)
      | rex field=U "(?P<ApiName>[^/]+)(?=\/[0-9a-fA-F\-]+$|$)"
      | stats min(D) as Min, max(D) as Max, avg(D) as Avg, perc95(D) as P95, perc98(D) as P98, perc99(D) as P99 by ApiName ]
| addinfo
| eval Availability% = round(100 - ('500'*100/TotalReq), 2)
| fillnull value=100 Availability%
| eval range = info_max_time - info_min_time
| eval AvgTPS = round(TotalReq/range, 5)
| eval Avg=floor(Avg)
| eval P95=floor(P95)
| eval P98=floor(P98)
| eval P99=floor(P99)
| sort TotalReq
| table ApiName, 1*, 2*, 3*, 4*, 5*, Min, Max, Avg, P95, P98, P99, AvgTPS, Availability%, TotalReq

I attempted to optimize it by combining the metrics calculation into a single stats command and using eventstats or streamstats to calculate the additional statistics without dropping the required fields. I also tried passing the additional metrics to xyseries, as below, but that did not help either. (PS: Tried with ChatGPT, which did not help, so I'm seeking help from real experts.)

| stats count as TotalReq, min(D) as Min, max(D) as Max, avg(D) as Avg, perc95(D) as P95, perc98(D) as P98, perc99(D) as P99 by ApiName, S
| xyseries ApiName S, TotalReq, Min, Max, Avg, P95, P98, P99
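One join-free pattern that keeps the per-API duration stats while still pivoting the status codes dynamically: compute the duration metrics with eventstats before collapsing, then turn S into a dynamic column name with eval {S}=... instead of xyseries. A hedged sketch under the same field assumptions (U, S, D) as above, not a drop-in replacement:

index=sample_index sourcetype=kube:container:sample_container
| fields U, S, D
| where isnotnull(U) and isnotnull(S) and isnotnull(D)
| rex field=U "(?P<ApiName>[^/]+)(?=\/[0-9a-fA-F\-]+$|$)"
| eventstats min(D) as Min max(D) as Max avg(D) as Avg perc95(D) as P95 perc98(D) as P98 perc99(D) as P99 by ApiName
| stats count as status_count first(Min) as Min first(Max) as Max first(Avg) as Avg first(P95) as P95 first(P98) as P98 first(P99) as P99 by ApiName, S
| eval {S} = status_count
| fields - S, status_count
| stats values(*) as * by ApiName
| addtotals fieldname=TotalReq 1* 2* 3* 4* 5*

Because the duration metrics are computed per ApiName (not per ApiName+S), first() just carries one copy through the pivot, and values(*) collapses the one-row-per-status results back to one row per API. The addinfo/Availability%/AvgTPS evals from the original query should then apply unchanged.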
I have a playbook set up to run on all events in a 10minute_timer label using the Timer app. These events do not contain artifacts. I've noticed the playbook runs fine when testing on a test_event that contains an artifact. When I moved it over to run on the timer label, it dies when it gets to my filter block. I've also run the exact same playbook on an event in my test_label which also didn't contain an artifact, and that too fails. I've tested it without the filter block and used a decision block instead; that works fine. Both blocks share the same Scope in the Advanced settings dropdown. My conditions in the filter block are fine and should evaluate to True; I added a test condition on the label name to make sure of this, and even that is not triggering. I think this may be a bug. I'm open to being wrong but not sure what else I can do to test it. Thanks. I believe this is a bug with SOAR.
We use Splunk for creating reports. When I insert a table in Dashboard Studio, I have to define a width and height for it. But the height should be different for each period we run the dashboard for, because the number of rows can differ per period. How can I do this without changing the layout every month?
We have different lookup inputs into the Splunk ES asset list framework. Some values for assets change over time, for example due to DHCP or DNS renaming. When an asset gets a new IP due to e.g. DHCP, the lookup used as input into the asset framework is updated accordingly, but the merged asset lookup "asset_lookup_by_str" will contain both the new and the old IP. So the new IP is appended to the asset; it does not replace the old IP. Due to "merge magic" that runs under the hood in the asset framework, over time this creates strange assets with many DNS names and many IPs. My question is: how long are asset list field values stored in the Splunk ES asset list framework? Are there any hidden values that keep track of, say, an IP, and will Splunk eventually remove the IP from the asset in the merged list? Or will the IP stay there forever, so these "multivalue assets" just grow with more and more DNS names and IPs until the mv field limits are reached? And if I reduce the asset list mv field limits, how does Splunk prioritize which values are included? Do the values already on the merged list have priority, or do new values have priority? I tried looking for answers in the documentation but could not find any. Hoping someone will share some insights here. Thanks!
I have set up Splunk. The machine has 15:26 as local time, but when I check the splunkd.log time it is 20:26. Why is there a difference between local time and splunkd.log time?
Hi, I am a rookie in SPL and I have this general correlation search for application events:

index="foo" sourcetype="bar" (fields.A="something" "fields.B"="something else")

If this were an application-specific search, I could just specify the service in the search. But what I want to achieve is to use a service id from the event, rather than a fixed value, to suppress results for that specific service. If I append

| `filter_maintenance_services("e5095542-9132-402f-8f17-242b83710b66")`

to the search it works, but if I use the event data service id it does not, e.g.

| `filter_maintenance_services($fields.ServiceID$)`

I suspect that it has to do with fields.ServiceID not being populated when the filter is deployed. How can I get this to work?
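Macros are expanded once when the search parses, so a per-event token like $fields.ServiceID$ can never reach `filter_maintenance_services` as an argument. An event-time alternative is to apply the same check with lookup/where instead; this sketch assumes the macro ultimately consults a maintenance lookup, and the lookup and field names below are placeholders rather than the macro's real internals:

index="foo" sourcetype="bar" (fields.A="something" "fields.B"="something else")
| lookup maintenance_services_lookup service_id AS fields.ServiceID OUTPUT service_id AS in_maintenance
| where isnull(in_maintenance)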
Our Splunk receives logs from VMware Workspace ONE (mobile device management, MDM) as syslog messages. What sourcetype needs to be configured in inputs.conf, and is there an add-on to assist in parsing?
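For framing, a minimal inputs.conf sketch for taking the syslog feed directly; the port, index, and sourcetype values are placeholders to replace with whatever the Workspace ONE add-on you settle on expects:

# Placeholder stanza; adjust protocol/port to match what Workspace ONE sends
[udp://514]
sourcetype = vmware:workspaceone
index = mdm
disabled = 0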
Hi Team, I am getting the below error message on my Splunk ES search head. Is there any troubleshooting I can perform from Splunk Web to correct this? Please help. PS: I don't have access to the backend.
Dynamic alert recipients for a test in a detector, mainly using custom properties in the alert recipients tab in detectors. Unable to crack that!
Hi All, We are in the process of onboarding logs from a centralized log server, where all endpoints forward their logs. We have installed a Splunk Heavy Forwarder on the server to monitor and forward these logs to the Indexers. I would like to know if there are any default sourcetypes available for data sources such as systemd.log and sudo.log.
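As a starting point, a hedged inputs.conf sketch for the heavy forwarder; linux_secure follows the Splunk Add-on for Unix and Linux convention for sudo/auth events, but both the sourcetype and index values here are assumptions to validate against the add-on you deploy:

[monitor:///var/log/sudo.log]
sourcetype = linux_secure
index = os

[monitor:///var/log/systemd.log]
sourcetype = syslog
index = os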
I am using the Java SDK to display data on screen. There was no error in version 1.6.0, which I initially used. However, after updating to 1.6.3, the following error appeared: "java.lang.NumberFormatException: multiple points". It happens randomly when a service connects or a job is performed.

2024-10-21 12:16:53.899 ERROR 2732 --- [nio-8090-exec-4] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] threw exception

java.lang.NumberFormatException: multiple points
    at java.base/jdk.internal.math.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1890) ~[na:na]
    Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Assembly trace from producer [reactor.core.publisher.MonoCompletionStage]:
    reactor.core.publisher.Mono.fromCompletionStage(Mono.java:549)
    org.springframework.core.ReactiveAdapterRegistry$ReactorRegistrar.lambda$registerAdapters$4(ReactiveAdapterRegistry.java:241)
Error has been observed at the following site(s):
    |_ Mono.fromCompletionStage ⇢ at org.springframework.core.ReactiveAdapterRegistry$ReactorRegistrar.lambda$registerAdapters$4(ReactiveAdapterRegistry.java:241)
Stack trace:
    at java.base/jdk.internal.math.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1890) ~[na:na]
    at java.base/jdk.internal.math.FloatingDecimal.parseDouble(FloatingDecimal.java:110) ~[na:na]
    at java.base/java.lang.Double.parseDouble(Double.java:543) ~[na:na]
    at java.base/java.text.DigitList.getDouble(DigitList.java:169) ~[na:na]
    at java.base/java.text.DecimalFormat.parse(DecimalFormat.java:2126) ~[na:na]
    at java.base/java.text.SimpleDateFormat.subParse(SimpleDateFormat.java:1933) ~[na:na]
    at java.base/java.text.SimpleDateFormat.parse(SimpleDateFormat.java:1541) ~[na:na]
    at java.base/java.text.DateFormat.parse(DateFormat.java:393) ~[na:na]
    at com.splunk.Value.toDate(Value.java:109) ~[splunk-1.6.3.0.jar:1.6.3]
    at com.splunk.Resource.load(Resource.java:166) ~[splunk-1.6.3.0.jar:1.6.3]
    at com.splunk.Entity.load(Entity.java:356) ~[splunk-1.6.3.0.jar:1.6.3]
    at com.splunk.Job.refresh(Job.java:940) ~[splunk-1.6.3.0.jar:1.6.3]
    at com.splunk.JobCollection.create(JobCollection.java:90) ~[splunk-1.6.3.0.jar:1.6.3]
    at com.splunk.JobCollection.create(JobCollection.java:108) ~[splunk-1.6.3.0.jar:1.6.3]
    at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) ~[spring-core-5.3.4.jar:5.3.4]
    at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:779) ~[spring-aop-5.3.4.jar:5.3.4]
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) ~[spring-aop-5.3.4.jar:5.3.4]
    at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750) ~[spring-aop-5.3.4.jar:5.3.4]
    at org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115) ~[spring-aop-5.3.4.jar:5.3.4]
    at org.springframework.aop.interceptor.AsyncExecutionAspectSupport.lambda$doSubmit$3(AsyncExecutionAspectSupport.java:276) ~[spring-aop-5.3.4.jar:5.3.4]
    at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1700) ~[na:na]
    at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java) ~[na:na]

Has anyone solved this error?
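"multiple points" from SimpleDateFormat.parse is the classic symptom of a java.text.SimpleDateFormat instance being shared across threads, and this trace hits it inside com.splunk.Value.toDate() while jobs are created from an async executor. A hedged workaround sketch that serializes the SDK's job-creation path until the thread-safety question is settled; the wrapper class is illustrative, not part of the SDK:

import com.splunk.Job;
import com.splunk.Service;

public class SerializedJobRunner {
    private final Service service;
    // Single lock guarding the SDK entry point that parses dates internally,
    // so only one thread at a time exercises the (possibly shared) formatter.
    private final Object sdkLock = new Object();

    public SerializedJobRunner(Service service) {
        this.service = service;
    }

    public Job createJob(String searchQuery) {
        synchronized (sdkLock) {
            return service.getJobs().create(searchQuery);
        }
    }
}

This trades throughput for safety; if the random failures disappear under the lock, that points at a concurrency issue rather than a 1.6.3 regression in parsing itself.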
Hi, I have a log with a Currency field containing all the valid currency codes (JPY, CNY, USD, etc.). I need to add a dropdown on top with the currency value, but my query should differentiate between local and foreign currency. For example, the user should search by selecting JPY as the first option, and the second option should list all currencies except JPY. I am not sure if this is possible in Splunk; I need experts' advice here.

Currency   Amount   Card Brand
JPY        100      XXX
CNY        100      XYZ
INR        100      UUU
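This is doable with two cascading dropdowns in Simple XML: the second input's populating search excludes whatever the first one selected. A minimal sketch, where index=payments and the field name Currency are placeholders for your actual data:

<input type="dropdown" token="local_ccy" searchWhenChanged="true">
  <label>Local currency</label>
  <search>
    <query>index=payments | stats count by Currency | fields Currency</query>
  </search>
  <fieldForLabel>Currency</fieldForLabel>
  <fieldForValue>Currency</fieldForValue>
</input>
<input type="dropdown" token="foreign_ccy" searchWhenChanged="true">
  <label>Foreign currency</label>
  <search>
    <query>index=payments | stats count by Currency | search Currency!="$local_ccy$" | fields Currency</query>
  </search>
  <fieldForLabel>Currency</fieldForLabel>
  <fieldForValue>Currency</fieldForValue>
</input>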
In the new update of TrendVision One Splunk for XDR, there is a new input configuration called 'Detection.' However, I am confused about whether OAT or Detection should be enabled, as they cannot be enabled simultaneously. Which one should be enabled, and in which cases?
Hi all, I am trying to understand the data in sourcetype=pan:hipmatch for a VPN posture-check use case. Has anyone developed, or does anyone know of, any dashboards built on pan:hipmatch data, and which fields can be used to correlate it with pan:globalprotect? Appreciate any pointers.
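For the correlation half of the question, both sourcetypes should expose user and source-IP fields after the Palo Alto add-on's extractions, so a stats over those keys shows which GlobalProtect sessions do or do not have a HIP report. The field names user and src_ip are assumptions to verify against your own extractions:

index=pan_logs (sourcetype=pan:hipmatch OR sourcetype=pan:globalprotect)
| stats values(sourcetype) as sourcetypes latest(_time) as last_seen by user, src_ip
| where mvcount(sourcetypes) < 2

Rows surviving the final where are user/IP pairs seen in only one of the two feeds, i.e. candidates for a missing or failed posture check.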
I am trying to deploy a SH cluster, but when I run the command below:

./splunk init shcluster-config -auth <username>:<password> -mgmt_uri <URI>:<management_port> -replication_port <replication_port> -replication_factor <n> -conf_deploy_fetch_url <URL>:<management_port> -secret <security_key> -shcluster_label <label>

I get this error:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Login failed

But when I set the following config:

[sslConfig]
cliVerifyServerName = true
sslVerifyServerCert = true

I get this error instead:

ERROR: certificate validation: self signed certificate in certificate chain
Couldn't complete HTTP request: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed
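The second error usually means splunkd has no CA configured to validate the self-signed chain against, so enabling verification without trusting the signing CA just moves the failure. A hedged server.conf sketch; the sslRootCAPath value is a placeholder for wherever the CA certificate that signed your server certificates actually lives:

[sslConfig]
sslVerifyServerCert = true
cliVerifyServerName = true
# CA bundle that signed the management-port certificates (placeholder path)
sslRootCAPath = /opt/splunk/etc/auth/mycerts/cacert.pem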
Hello, I am writing to ask at what point, in terms of EPS or daily ingested GB/day and the number of users simultaneously accessing the search head, I should consider a search head cluster. In other words, when is one single SH enough, and when should it be three SHs + a deployer, from your technical perspective?
Hi, I'm interested in learning more about RBA Navigator. Does anyone have a way to contact Matt Snyder, the app creator? I would like more information about the list of available features, use cases (if possible), and an installation guide. Thanks.