All Posts

I am looking at logs for asynchronous calls (sending a message to Kafka and receiving the ack back). We have two events: the first is when we receive the message, start processing, and send it to Kafka; the second is when we receive the response back from Kafka. I have a unique message ID to track both events, and I want to capture the average processing time across all unique IDs. In the query below I have not yet added a condition for the unique ID, and I am not getting the "difference" value. Can you please help?

index=web* "Message sent to Kafka" OR "Response received from Kafka"
| stats earliest(_time) as Msg_received, latest(_time) as Response_Kafka
| eval difference=Response_Kafka-Msg_received
| eval difference=strftime(difference,"%d-%m-%Y %H:%M:%S")
| eval Msg_received=strftime(Msg_received,"%d-%m-%Y %H:%M:%S")
| eval Response_Kafka=strftime(Response_Kafka,"%d-%m-%Y %H:%M:%S")
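A minimal sketch of the per-ID timing, assuming the unique identifier is extracted into a field called message_id (a hypothetical name) and that each ID produces exactly one send and one response event:

(index=web* ("Message sent to Kafka" OR "Response received from Kafka"))
| stats earliest(_time) as Msg_received latest(_time) as Response_Kafka by message_id
| eval difference=Response_Kafka-Msg_received
| stats avg(difference) as avg_processing_seconds

Note that the difference is a number of seconds, not a timestamp, so tostring(difference, "duration") is a better fit for display than strftime, which formats an absolute point in time.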
Currently, our company successfully collects most of the Microsoft 365 logs, but we are facing challenges with gathering the security logs. We aim to comprehensively collect all security logs for Microsoft 365, encompassing elements such as Intune, Defender, and more. Could you please provide advice on how to effectively obtain all the security logs for Microsoft 365?
Your foreach eval statement is wrong; it should test for the Total and delta fields:

[eval <<FIELD>> = if("<<MATCHSTR>>"!="Total" AND "<<MATCHSTR>>"!="delta" AND (-delta > Total OR Total < 5000), null(), '<<FIELD>>')]

You are not excluding the 'delta' and 'Total' fields from the eval, so Total is set to null() before you process the other fields, which breaks the eval for subsequent passes.
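For completeness, a sketch of how the corrected guard sits in the full pipeline from the question (using the question's own sourcetype emulation and its 5000 threshold; the extra parentheses around the OR are needed because AND binds tighter than OR in eval):

index=_internal earliest=-7d
| timechart span=2h count by sourcetype
| addtotals
| delta "Total" as delta
| foreach * [eval <<FIELD>> = if("<<MATCHSTR>>"!="Total" AND "<<MATCHSTR>>"!="delta" AND (-delta > Total OR Total < 5000), null(), '<<FIELD>>')]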
Hi @gcusello, thanks for your input. A follow-up question: should I expect any data loss during the upgrade, or is the add-on capable of backfilling data lost while the upgrade runs? Also, do I need to restart Splunk on my server for the changes to take effect?
If the problem is solved, please select the answer and close.  Karma for all that helped advance the solution is also appreciated.
A JSON array must first be converted to multivalue before you can use mv-functions. The classic method is mvexpand together with spath:

| spath input=spec path=spec.containers{}
| mvexpand spec.containers{}
| spath input=spec.containers{}
| where privileged == "true"

With your sample data, the output looks like this:

name  privileged  spec.containers{}                      spec.field1  spec.field2  spec.field3
A     true        { "name": "A", "privileged": "true" }  X            Y            Z
C     true        { "name": "C", "privileged": "true" }  X            Y            Z

If the array is big and events are many, mvexpand risks running out of memory. So Splunk 8 introduced a group of JSON functions. The following is more memory efficient (and likely more efficient in general), but the output is multivalued:

| spath input=spec path=spec.containers{}
| fields - spec.containers{}.*
| eval privileged_containers = mvfilter(json_extract('spec.containers{}', "privileged") == "true")

Your sample data would give something like this (privileged_containers and spec.containers{} are multivalue fields):

privileged_containers                  spec.containers{}                      spec.field1  spec.field2  spec.field3
{ "name": "A", "privileged": "true" }  { "name": "A", "privileged": "true" }  X            Y            Z
{ "name": "C", "privileged": "true" }  { "name": "B" }
                                       { "name": "C", "privileged": "true" }

BTW, please post JSON in raw text and make sure the format is compliant, with proper quotes and so on, so volunteers don't have to waste time reconstructing data or worrying about noncompliant data. Here is a compliant emulation that you can play with and compare with real data:

| makeresults
| eval _raw = "{\"spec\": { \"field1\": \"X\", \"field2\": \"Y\", \"field3\": \"Z\", \"containers\": [ { \"name\": \"A\", \"privileged\": \"true\" }, { \"name\": \"B\" }, { \"name\": \"C\", \"privileged\": \"true\" } ] }}"
| spath
``` data emulation above ```

Hope this helps.
Hello, I have the following example JSON data:

spec: {
  field1: X,
  field2: Y,
  field3: Z,
  containers: [
    { name: A, privileged: true },
    { name: B },
    { name: C, privileged: true }
  ]
}

I'm trying to write a query that only returns privileged containers. I've been trying to use mvfilter, but that won't return the name of the container. Here's what I was trying to do:

index=MyIndex spec.containers{}.privileged=true
| eval priv_containers=mvfilter(match('spec.containers{}.privileged',"true"))
| stats values(priv_containers) count by field1, field2, field3

This, however, just returns "true" in the priv_containers values column instead of the container's name. What would be the best way to accomplish that?
I have an unstable data feed that sometimes only reports on a fraction of all assets, and I do not want such periods to show any number. The best way I can figure to exclude those periods is to detect a sudden drop in some sort of total. So I set up a condition after timechart like this:

| addtotals
| delta "Total" as delta
| foreach * [eval <<FIELD>> = if(-delta > Total OR Total < 5000, null(), '<<FIELD>>')]

The algorithm works well for Total, and for some series in the timechart, but not for all of them, and not all the time. Here are two emulations using index=_internal on my laptop. One groups by source, the other by sourcetype.

index=_internal earliest=-7d
| timechart span=2h count by source
``` data emulation 1 ```

With group by source, all series seem to blank out as expected. Now I run the same tally by sourcetype:

index=_internal earliest=-7d
| timechart span=2h count by sourcetype
``` data emulation 2 ```

This time, all gaps have at least one series that is not null; some series go to zero instead of null, and some are obviously above zero. What is the determining factor here? I would also appreciate suggestions about alternative approaches.
By applying these settings in props.conf, the prefix was removed and events are parsing as expected:

SEDCMD-1=s/^[^\{]+{"records":\s+\[\{"value":\s//
SEDCMD-2=s/}\]\}//
Hello guys,

We have some orphaned saved searches in our Splunk Cloud instance that are viewable via the following REST search:

| rest splunk_server=local /servicesNS/-/-/saved/searches add_orphan_field=yes count=0

However, when looking for these searches under Searches > Reports and Alerting, they do not show up. There are also zero saved searches visible under Settings > All Configurations > Reassign Knowledge Objects > Orphaned (with all filters set to all).

We are trying to reassign these searches via REST with the following example syntax:

curl -sk -H 'Authorization: Bearer <token>' -d 'owner=<name of valid owner>' https://<splunk cloud.com>:8089/servicesNS/nobody/search/saved/searches/%28%20Customers-LoyaltyEnrollment_1.0%20%29

but are receiving an error. This is not an issue with the id, as the following is able to pull the saved search info:

curl -sk -H 'Authorization: Bearer <token>' https://<splunk cloud.com>:8089/servicesNS/nobody/search/saved/searches/%28%20Customers-LoyaltyEnrollment_1.0%20%29

Does anyone have a better syntax to use to POST this owner change to the saved searches?
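A sketch of one variant worth trying, assuming the standard knowledge-object ACL endpoint applies here: ownership changes are normally POSTed to the object's /acl sub-endpoint, and that endpoint requires a sharing value alongside owner (the owner value and app context below are placeholders from your own example):

curl -sk -H 'Authorization: Bearer <token>' \
  -d 'owner=<name of valid owner>' \
  -d 'sharing=app' \
  https://<splunk cloud.com>:8089/servicesNS/nobody/search/saved/searches/%28%20Customers-LoyaltyEnrollment_1.0%20%29/acl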
If I use the following command, it truncates everything in the second query so that I only have one unique value for every other field:

index=main measurement_type=loadTime screen_class_name=HomeFragment
| join sessionId
    [search index=api_analytics sourcetype=api_analytics
    | `expand_api_analytics`
    | search iri=home/header
    | spath input=analytic path=session_id output=sessionId]
I currently have events that include load times and events that include header colour for my app. Both kinds of events carry the user's session id. How do I join the two events on session id so I can see the load time broken down by header colour?

Query for load time:

index="main" measurement_type=loadTime screen_class_name=HomeFragment

Query for header colour:

index=api_analytics sourcetype=api_analytics
| `expand_api_analytics`
| search iri=home/header
| spath input=analytic path=session_id output=session_id
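One pattern often recommended over join is to search both datasets at once and let stats stitch them by the common key, which avoids join's subsearch truncation. A rough sketch, assuming the load-time events expose the session id in a field named sessionId, and that loadTime and headerColour field names exist (all three names are assumptions about your data):

(index="main" measurement_type=loadTime screen_class_name=HomeFragment) OR (index=api_analytics sourcetype=api_analytics)
| spath input=analytic path=session_id output=session_id
| eval session_id=coalesce(session_id, sessionId)
| stats values(loadTime) as loadTime values(headerColour) as headerColour by session_id
| stats avg(loadTime) as avg_loadTime by headerColour

The `expand_api_analytics` macro and the iri=home/header filter would still need to be folded in; the point is just that stats by session_id merges the two event types without join's limits.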
Hi Splunk community,

I have JSON logs and I want to remove the prefix from the events, capturing from {"successfulSetoflog through "AZURE API Health Event"}.

Sample event:

2020-02-10T17:42:41.088Z 775ab4c6-ccc3-600b-9c84-124320628f00 {"records": [{"value": {"successfulSetoflog": [{"awsAccountId": "123456789123", "event": {"arn": "arn:aws:health:us-east-........................................................  1}, "detail-case": "AZURE API Health Event"}}]}

The expected output would be:

{"successfulSetoflog": [{"awsAccountId": "123456789123", "event": {"arn": "arn:aws:health:us-east-........................................................  1}, "detail-case": "AZURE API Health Event"}
I have a JSON file that is formatted like this:

{
  "meta": {
    "serverTime": 1692112678688.699,
    "agentsReady": true,
    "status": "success",
    "token": "ABCDEFG",
    "user": {
      "userName": "username",
      "role": "ADMIN"
    }
  },
  "vulnerabilities": [
    {
      "id": "pcysys_linux_0.10000000",
      "creation_time": 1690581702599.0,
      "name": "name",
      "summary": "summary",
      "found_on": "Host: 10.10.10.10",
      "target": "Host",
      "target_id": "abcdefg",
      "port": 445,
      "protocol": "abc",
      "severity": 3.5,
      "priority": null,
      "insight": "this is the insight",
      "remediation": "this is the remediation"
    },
    {
      "id": "pcysys_linux_0.10000000",
      "creation_time": 1690581702599.0,
      "name": "name",
      "summary": "summary",
      "found_on": "Host: 10.10.10.10",
      "target": "Host",
      "target_id": "abcdefg",
      "port": 445,
      "protocol": "abc",
      "severity": 3.5,
      "priority": null,
      "insight": "this is the insight",
      "remediation": "this is the remediation"
    }
  ]
}

I am trying to ingest just the vulnerabilities. It works when I try it in the Splunk UI, but when I save it in my props.conf file it doesn't split correctly, and the id from one section gets appended to the end of the previous one.

Here is what I am trying:

[sourcetype]
LINE_BREAKER = }(,[\r\n]+)
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = 1
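A hedged guess at a variant worth testing: if the file on disk has no newline after the comma between vulnerability objects, the (,[\r\n]+) capture group never matches and Splunk falls back to other break points, which could produce exactly the "id appended to the previous event" symptom. The sketch below tolerates arbitrary whitespace around the comma and anchors on the next object's "id" key; the SEDCMD names are placeholders, and this is untested against your actual file:

[sourcetype]
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = 1
# break between objects:  }  ,  { "id"
LINE_BREAKER = \}(\s*,\s*)\{\s*"id"
# strip the leading meta block up to the start of the vulnerabilities array
SEDCMD-strip_head = s/^\{.*"vulnerabilities":\s*\[//
# strip the trailing "]}" from the last event
SEDCMD-strip_tail = s/\]\s*\}\s*$//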
Hello, I need to monitor a Python script I've developed. So far it does have a logging object, and logging.info, requests, and MySQL sqlhooks logs are shown when pyagent run is started, but I can't see any reference to my app on the server. Any recommendation would make me really grateful.

My case is similar to this one, but I'm on the v23 Python agent:
https://stackoverflow.com/questions/69161757/do-appdynamics-python-agent-supports-barebone-python-scripts-w-o-wsgi-or-gunicor
https://docs.appdynamics.com/appd/23.x/23.9/en/application-monitoring/install-app-server-agents/python-agent/python-supported-environments

I don't necessarily need metrics monitoring, but I do really need to monitor the events happening in the script.

Do you folks have any suggestion whether it's possible for the AppDynamics agent to hook only into the asyncio library, performing or simulating the same layer that the Java proxy agent sniffs inside the Python VM? Is it possible to send these stimuli straight to another Java program that would make the BT call?
https://rob-blackbourn.medium.com/asgi-event-loop-gotcha-76da9715e36d
https://www.uvicorn.org/#fastapi
https://docs.appdynamics.com/appd/21.x/21.4/en/application-monitoring/install-app-server-agents/java-agent/install-the-java-agent/instrument-jvms-started-by-batch-or-cron-jobs

Finally, I could even dare to look further into using OpenTelemetry for my case, in order to collect the main points. Is OpenTelemetry a standard feature of AppDynamics? Is it an extra paid option?
https://github.com/Appdynamics/opentelemetry-collector-contrib
https://opentelemetry.io/docs/instrumentation/python/automatic/
https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/logging/logging.html
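For the OpenTelemetry route, a minimal manual-instrumentation sketch for a barebones script (this assumes the opentelemetry-api and opentelemetry-sdk packages; the tracer name and span name are placeholders, and shipping the spans into AppDynamics would still need whichever collector/exporter your setup supports):

# pip install opentelemetry-api opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire a provider that prints spans to stdout; swap ConsoleSpanExporter
# for an OTLP exporter when sending to a collector.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("my-batch-script")  # placeholder name

def process():
    # Each unit of work becomes a span, visible even without WSGI/ASGI.
    with tracer.start_as_current_span("process-batch"):
        pass  # existing script logic goes here

if __name__ == "__main__":
    process()
    provider.shutdown()  # flush pending spans before the script exits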
You are still thinking in SQL terms. Splunk's "schema on the fly" means there is no predefined "primary key" in any data. Any key, or combination of keys, or evaluation expression on any key or combination of keys, can be used as a "primary key".

    So for example, if I have 3 events with that field value in dataset A and 4 events with that particular field value in dataset B, then I expect to have 12 events in the result dataset (after the merge). What Splunk command/s would be useful to merge these datasets in this fashion?

It would be much more profitable for you to share some sample data (anonymized as necessary, or proper mock data, in text), then describe the desired output. From your verbal description, it seems that you want an outer join of sorts.

The simplest, and perhaps the fastest, command would be stats. To illustrate, I assume that you want to preserve all other fields in both datasets:

(<dataset A>) OR (<dataset B>)
| stats values(*) as * by not_a_primary_key_but_is_common_in_both_A_and_B

Hope this helps.
This works and makes total sense. First initialize values in the lookup table and then craft an appended search that will perform stats from the search and populate the lookup table. It's Gold Jerry, Gold Jerry!    
There should be a sequence_number field in the config logs that can be correlated with the other log types carrying the same number, to list the changes made to firewall rules.
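A sketch of what that correlation could look like in SPL, assuming the different firewall log types live in one index and all expose the sequence_number field (the index name and the summarized fields are placeholders):

index=<firewall_index> sequence_number=*
| stats values(sourcetype) as log_types values(_raw) as related_events by sequence_number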
For anyone else who comes across this issue, an updated version of tabs.js and tabs.css can be found directly in the blog author's GitHub repo: splunk-dashboard-tabs-example/src/appserver/static/tabs.js at master · LukeMurphey/splunk-dashboard-tabs-example · GitHub. The most recent version has been updated to work with jQuery 3.5.
OK, I will try to explain. We have a predefined "previous month" range in the time filter: previous month = earliest -1mon@mon, latest @mon (e.g., the month of August).

When I load the dashboard I see one set of data, while users in other locations see different data, because of the -1mon@mon condition used in the time range filter (it resolves to a different epoch time, since the start of the day differs between my location and theirs). So we tried to use the query below:

| eval lnum=if(match("1690848000","^[@a-zA-Z]+"),"str","num"), enum=if(match("1688169600","[a-zA-Z]"),"str","num")
| eval latest=case(isnum(1690848000),(1690848000-60),"1690848000"="now",now(),"1690848000"="",now(),lnum!="str","1690848000",1=1,relative_time(now(), "1690848000"))
| eval earliest=case(isnum(1688169600),(1688169600-60),"1688169600"="0","0",enum!="str","1688169600",1=1,relative_time(now(), "1688169600"))

In this query, if I leave out the isnum condition it works for the previous month (the same for users in all locations); if I include the isnum condition it does not work. That is because isnum does not accept quoted values (isnum(123), not isnum("123")), while the previous-month tokens (-1mon@mon, @mon) are not accepted without double quotes. This is the issue we are facing. Can you please suggest any change to the query, or any other way to fix this? @yuanliu

And also one other question (for a different requirement): how can I remove the presets from the time range filter?

Help with this issue would be highly appreciated.
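A hedged sketch of one way around the quoting problem: always keep the token quoted, test it with match() for a purely numeric value, and convert with tonumber() only when it matches, so both epoch values and relative-time strings like -1mon@mon go through the same code path ($latest$ here is a placeholder for however the dashboard injects the value):

| eval latest_raw="$latest$"
| eval latest=case(
    match(latest_raw, "^-?\d+(\.\d+)?$"), tonumber(latest_raw)-60,
    latest_raw="now" OR latest_raw="", now(),
    1=1, relative_time(now(), latest_raw))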