All Posts

If the problem is solved, please select the answer and close.  Karma for all who helped advance the solution is also appreciated.
A JSON array must first be converted to multivalue before you can use mv-functions.  The classic method is mvexpand together with spath:

| spath input=spec path=spec.containers{}
| mvexpand spec.containers{}
| spath input=spec.containers{}
| where privileged == "true"

With your sample data, the output looks like

name  privileged  spec.containers{}                       spec.field1  spec.field2  spec.field3
A     true        { "name": "A", "privileged": "true" }   X            Y            Z
C     true        { "name": "C", "privileged": "true" }   X            Y            Z

If the array is big and events are many, mvexpand risks running out of memory.  For this reason, Splunk 8 introduced a group of JSON functions.  The following is more memory efficient (and likely more efficient in general), but the output is multivalued:

| spath input=spec path=spec.containers{}
| fields - spec.containers{}.*
| eval privileged_containers = mvfilter(json_extract('spec.containers{}', "privileged") == "true")

Your sample data would give something like

privileged_containers                    spec.containers{}                        spec.field1  spec.field2  spec.field3
{ "name": "A", "privileged": "true" }    { "name": "A", "privileged": "true" }    X            Y            Z
{ "name": "C", "privileged": "true" }    { "name": "B" }
                                         { "name": "C", "privileged": "true" }

BTW, please post JSON as raw text and make sure the format is compliant, with proper quotes and so on, so volunteers don't have to waste time reconstructing data or worrying about noncompliant data.  Here is a compliant emulation that you can play with and compare with real data:

| makeresults
| eval _raw = "{\"spec\": { \"field1\": \"X\", \"field2\": \"Y\", \"field3\": \"Z\", \"containers\": [ { \"name\": \"A\", \"privileged\": \"true\" }, { \"name\": \"B\" }, { \"name\": \"C\", \"privileged\": \"true\" } ] }}"
| spath
``` data emulation above ```

Hope this helps.
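If what you ultimately need is just the container names rather than the whole JSON objects, here is a hedged follow-on sketch (mvmap and json_extract require a recent Splunk version, 8.1 or later for json_extract; priv_names is an illustrative field name):

| eval priv_names = mvmap(privileged_containers, json_extract(privileged_containers, "name"))

This applies json_extract to each value of the multivalue field in turn, leaving a multivalue list of names, A and C with the sample data.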
Hello, I have the following example JSON data:

spec: {
  field1: X,
  field2: Y,
  field3: Z,
  containers: [
    {
      name: A
      privileged: true
    },
    { name: B },
    {
      name: C
      privileged: true
    }
  ]
}

I'm trying to write a query that only returns privileged containers. I've been trying to use mvfilter, but that won't return the name of the container. Here's what I was trying to do:

index=MyIndex spec.containers{}.privileged=true
| eval priv_containers=mvfilter(match('spec.containers{}.privileged',"true"))
| stats values(priv_containers) count by field1, field2, field3

This will, however, just return "true" in the priv_containers values column instead of the container's name. What would be the best way to accomplish that?
I have an unstable data feed that sometimes only reports on a fraction of all assets.  I do not want such periods to show any number.  The best way I can figure to exclude those periods is to detect a sudden drop in some sort of total.  So, I set up a condition after timechart like this:

| addtotals
| delta "Total" as delta
| foreach * [eval <<FIELD>> = if(-delta > Total OR Total < 5000, null(), '<<FIELD>>')]

The algorithm works well for Total, and for some series in the timechart, but not for all, and not all the time.

Here are two emulations using index=_internal on my laptop.  One groups by source, the other by sourcetype.

index=_internal earliest=-7d
| timechart span=2h count by source
``` data emulation 1 ```

With group by source, all series seem to blank out as expected.

Now, I can run the same tally by sourcetype, like this:

index=_internal earliest=-7d
| timechart span=2h count by sourcetype
``` data emulation 2 ```

This time, every gap has at least one series that is not null; some series go to zero instead of null, and some stay visibly above zero.

What is the determining factor here? If you have suggestions about alternative approaches, I would appreciate those as well.
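One alternative I have been considering, sketched with purely illustrative numbers (the window size and the 0.5 threshold are guesses to tune, and the assumption is that a drop below a trailing average marks an unreliable period):

| addtotals
| streamstats window=12 current=false avg(Total) as trailing_avg
| foreach * [eval <<FIELD>> = if(Total < 0.5 * trailing_avg, null(), '<<FIELD>>')]

Comparing each bucket's Total against a trailing average rather than a single-step delta would also catch gradual declines that never trip a one-bucket test.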
By applying these settings in props.conf, the prefix was removed and events now parse as expected.

SEDCMD-1=s/^[^\{]+{"records":\s+\[\{"value":\s//
SEDCMD-2=s/}\]\}//
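If you want to sanity-check a SEDCMD before committing it to props.conf, one trick is to replay it at search time with rex mode=sed, which accepts the same sed syntax (the sample _raw below is abbreviated and illustrative):

| makeresults
| eval _raw = "2020-02-10T17:42:41.088Z 775ab4c6-ccc3-600b-9c84-124320628f00 {\"records\": [{\"value\": {\"successfulSetoflog\": []}, \"detail-case\": \"AZURE API Health Event\"}]}"
| rex mode=sed "s/^[^\{]+\{\"records\":\s+\[\{\"value\":\s//"
| rex mode=sed "s/}\]\}//"

The output of this search should mirror what the indexer will produce with the equivalent SEDCMD settings.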
Hello guys,

We have some orphaned saved searches in our Splunk Cloud instance that are viewable via the following REST search:

| rest splunk_server=local /servicesNS/-/-/saved/searches add_orphan_field=yes count=0

However, when looking at the searches pulled in Searches, Reports, and Alerts, they do not show up. There are also zero saved searches viewable under Settings > All Configurations > Reassign Knowledge Objects > Orphaned (with all filters set to all).

We are trying to reassign these searches via REST with the following example syntax:

curl -sk -H 'Authorization: Bearer <token>' -d 'owner=<name of valid owner>' https://<splunk cloud.com>:8089/servicesNS/nobody/search/saved/searches/%28%20Customers-LoyaltyEnrollment_1.0%20%29

but are receiving an error. This is not an issue with the id, as the following is able to pull saved search info:

curl -sk -H 'Authorization: Bearer <token>' https://<splunk cloud.com>:8089/servicesNS/nobody/search/saved/searches/%28%20Customers-LoyaltyEnrollment_1.0%20%29

Does anyone have a better syntax to use to POST this owner change to the saved searches?
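For reference, this is how we are listing the orphans with their app context (the orphan and eai:acl.* fields come from the same endpoint); we suspect the servicesNS path in the POST must name the app each search actually lives in:

| rest splunk_server=local /servicesNS/-/-/saved/searches add_orphan_field=yes count=0
| search orphan=1
| table title eai:acl.app eai:acl.owner eai:acl.sharing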
If I use the following command, it truncates everything in the second query so that I only have one unique value for every other field:

index=main measurement_type=loadTime screen_class_name=HomeFragment
| join sessionId [search index=api_analytics sourcetype=api_analytics | `expand_api_analytics` | search iri=home/header | spath input=analytic path=session_id output=sessionId]
I currently have events that include load times and events that include header colour for my app. Both kinds of events carry the user's session id. How do I join the two events on session id so I can see load time by header colour?

Query for load time:

index="main" measurement_type=loadTime screen_class_name=HomeFragment

Query for header colour:

index=api_analytics sourcetype=api_analytics
| `expand_api_analytics`
| search iri=home/header
| spath input=analytic path=session_id output=session_id
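I also tried sketching a join-free variant along these lines (loadTime and headerColour are stand-ins for whatever fields the events actually carry), but I am not sure it is the right pattern:

index="main" measurement_type=loadTime screen_class_name=HomeFragment
| append [search index=api_analytics sourcetype=api_analytics | `expand_api_analytics` | search iri=home/header | spath input=analytic path=session_id output=sessionId]
| stats values(loadTime) as loadTime values(headerColour) as headerColour by sessionId

The stats by sessionId step should avoid join's result limits on the outer leg, though append still caps the subsearch.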
Hi Splunk community,

I have JSON logs and I want to remove the prefix from the events, capturing everything from {"successfulSetoflog through "AZURE API Health Event"}.

Sample event:

2020-02-10T17:42:41.088Z 775ab4c6-ccc3-600b-9c84-124320628f00 {"records": [{"value": {"successfulSetoflog": [{"awsAccountId": "123456789123", "event": {"arn": "arn:aws:health:us-east-........................................................  1}, "detail-case": "AZURE API Health Event"}}]}

The expected output would be:

{"successfulSetoflog": [{"awsAccountId": "123456789123", "event": {"arn": "arn:aws:health:us-east-........................................................  1}, "detail-case": "AZURE API Health Event"}
I have a JSON file that is formatted like this:

{
  "meta": {
    "serverTime": 1692112678688.699,
    "agentsReady": true,
    "status": "success",
    "token": "ABCDEFG",
    "user": {
      "userName": "username",
      "role": "ADMIN"
    }
  },
  "vulnerabilities": [
    {
      "id": "pcysys_linux_0.10000000",
      "creation_time": 1690581702599.0,
      "name": "name",
      "summary": "summary",
      "found_on": "Host: 10.10.10.10",
      "target": "Host",
      "target_id": "abcdefg",
      "port": 445,
      "protocol": "abc",
      "severity": 3.5,
      "priority": null,
      "insight": "this is the insight",
      "remediation": "this is the remediation"
    },
    {
      "id": "pcysys_linux_0.10000000",
      "creation_time": 1690581702599.0,
      "name": "name",
      "summary": "summary",
      "found_on": "Host: 10.10.10.10",
      "target": "Host",
      "target_id": "abcdefg",
      "port": 445,
      "protocol": "abc",
      "severity": 3.5,
      "priority": null,
      "insight": "this is the insight",
      "remediation": "this is the remediation"
    }
  ]
}

I am trying to ingest just the vulnerabilities. It works when I try it in the Splunk UI, but when I save it in my props.conf file it doesn't split correctly, and the id from one section gets appended to the end of the previous one.

Here is what I am trying:

[sourcetype]
LINE_BREAKER = }(,[\r\n]+)
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = 1
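In the meantime I am using a search-time workaround, assuming the whole document lands as a single event (vuln is just an illustrative field name):

| spath path=vulnerabilities{} output=vuln
| mvexpand vuln
| spath input=vuln

This pulls each element of the vulnerabilities array into its own result row and re-extracts its fields, but it only helps at search time and does not change how the file is indexed.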
Hello,

I need to monitor a Python script I've developed. So far it does have a logging object, and logging.info, requests, and MySQL sqlhooks logs show up when pyagent run is started, but I can't see any reference to my app on the server. Any recommendation would make me really grateful.

My case is similar to this one, but I'm on the v23 Python agent:

https://stackoverflow.com/questions/69161757/do-appdynamics-python-agent-supports-barebone-python-scripts-w-o-wsgi-or-gunicor
https://docs.appdynamics.com/appd/23.x/23.9/en/application-monitoring/install-app-server-agents/python-agent/python-supported-environments

I don't necessarily need metrics monitoring, but I do need to monitor the events happening in the script.

Do you folks have any suggestion whether it's possible for the AppDynamics agent to hook only on the asyncio library, performing or simulating the same layer that the Java proxy agent sniffs in the Python VM? Is it possible to send this stimulus straight to another Java program that would make the BT call?

https://rob-blackbourn.medium.com/asgi-event-loop-gotcha-76da9715e36d
https://www.uvicorn.org/#fastapi
https://docs.appdynamics.com/appd/21.x/21.4/en/application-monitoring/install-app-server-agents/java-agent/install-the-java-agent/instrument-jvms-started-by-batch-or-cron-jobs

Finally, I could even dare to look further into using OpenTelemetry for my case, in order to collect the main data points. Is OpenTelemetry a standard feature of AppDynamics, or is it an extra paid option?

https://github.com/Appdynamics/opentelemetry-collector-contrib
https://opentelemetry.io/docs/instrumentation/python/automatic/
https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/logging/logging.html
You are still thinking in SQL terms.  Splunk's "schema on the fly" means there is no predefined "primary key" in any data.  Any key, combination of keys, or evaluation expression over any key or combination of keys can serve as a "primary key".

So for example, if I have 3 events with that field value in dataset A and 4 events with that particular field value in dataset B, then I expect to have 12 events in the result dataset (after the merge). What Splunk command/s would be useful to merge these datasets in this fashion?

It would be much more profitable for you to share some sample data (anonymized as necessary, or proper mock data, in text), then describe the desired output.  From your verbal description, it seems that you want an outer join of sorts. The simplest, and perhaps fastest, command would be stats.  To illustrate, I assume that you want to preserve all other fields in both datasets:

(<dataset A>) OR (<dataset B>)
| stats values(*) as * by not_a_primary_key_but_is_common_in_both_A_and_B

Hope this helps.
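If the goal really is the full m-by-n pairing you describe, a hedged extension of the same idea (A_field and B_field are placeholders for whatever distinguishes each dataset's rows; note that values() deduplicates, so identical values collapse):

(<dataset A>) OR (<dataset B>)
| stats values(A_field) as A_vals values(B_field) as B_vals by common_field
| mvexpand A_vals
| mvexpand B_vals

The two mvexpand steps fan the 3 values from A against the 4 values from B, yielding the 12 combinations per key value.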
This works and makes total sense. First initialize values in the lookup table, then craft an appended search that performs stats from the search and populates the lookup table. It's Gold Jerry, Gold Jerry!
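For anyone landing here later, the pattern in a nutshell, with purely illustrative lookup, index, and field names:

| inputlookup my_totals.csv
| append [search index=my_index | stats count by host]
| stats sum(count) as count by host
| outputlookup my_totals.csv

This reads the seeded lookup, appends fresh counts from the search, re-aggregates the two, and writes the result back to the same lookup.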
There should be a sequence_number field in the config logs that can be correlated with other logs carrying the same number to list the changes made to firewall rules.
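Something along these lines might do it, though the sourcetypes and fields here are assumptions based on common Palo Alto add-on conventions, so adjust to your data:

sourcetype=pan:config OR sourcetype=pan:system
| stats values(admin) as admin values(command) as command values(_raw) as raw_events by sequence_number

Grouping by sequence_number pulls together every log line the firewall emitted for a single configuration change.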
For anyone else who comes across this issue, an updated version of tabs.js and tabs.css can be found directly in the blog author's GitHub repo: splunk-dashboard-tabs-example/src/appserver/static/tabs.js at master · LukeMurphey/splunk-dashboard-tabs-example · GitHub. The most recent version has been updated to work with jQuery 3.5.
OK, I will try to explain. We have a predefined "previous month" range in the time filter: previous month = -1mon@mon to @mon (e.g., the month of August).

When I load the dashboard, I see one set of data, and users in other locations see different data because of the -1mon@mon condition in the time range filter (it resolves to a different epoch time, since the starting day differs between my location and theirs). So we tried the query below:

| eval lnum=if(match("1690848000","^[@a-zA-Z]+"),"str","num"), enum=if(match("1688169600","[a-zA-Z]"),"str","num")
| eval latest=case(isnum(1690848000),(1690848000-60),"1690848000"="now",now(),"1690848000"="",now(),lnum!="str","1690848000",1=1,relative_time(now(), "1690848000"))
| eval earliest=case(isnum(1688169600),(1688169600-60),"1688169600"="0","0",enum!="str","1688169600",1=1,relative_time(now(), "1688169600"))

If we use this without the isnum condition, it works for the previous month (the same for users in all locations); if we include the isnum condition, it does not work. That is because isnum does not accept quoted values (isnum(123) works, not isnum("123")), while the previous-month tokens (-1mon@mon, @mon) cannot be used without double quotes. That is the issue we are facing. Can you please suggest a change to the query, or any other fix? @yuanliu

And one other question (for a separate requirement): how can we remove the presets from the time range filter?

Highly appreciated if this issue gets resolved.
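We also experimented with wrapping the tokens in tonumber(), since tonumber() returns null for non-numeric strings, so isnum(tonumber(...)) is true for quoted epochs and false for relative-time modifiers like -1mon@mon. Shown with the same literal tokens, something like:

| eval latest=case(isnum(tonumber("1690848000")), tonumber("1690848000")-60, "1690848000"="now", now(), "1690848000"="", now(), 1=1, relative_time(now(), "1690848000"))
| eval earliest=case(isnum(tonumber("1688169600")), tonumber("1688169600")-60, "1688169600"="0", 0, 1=1, relative_time(now(), "1688169600"))

With tokens substituted, "-1mon@mon" fails the tonumber test and falls through to relative_time(), while "1690848000" converts cleanly, and both paths stay inside double quotes. Would this be a sound approach?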
@michael_vi First, thank you for the help! I was able to get this script updated here, and it puts the apps in an alphabetical list. Thank you again.

import requests
import json

# Splunkbase API endpoint to get a list of all apps
splunkbase_api_url = "https://splunkbase.splunk.com/api/v1/app/"

# Initialize an empty list to store all apps
all_apps = []

# Initialize offset and batch size
offset = 0
batch_size = 25  # You can adjust this if needed

while True:
    # Construct the URL with the current offset
    params = {"offset": offset}
    response = requests.get(splunkbase_api_url, params=params)

    if response.status_code == 200:
        # Parse the JSON response to access the list of apps
        apps = response.json()

        # Add the retrieved apps to the list
        all_apps.extend(apps['results'])  # Use ['results'] to get the apps

        # Calculate the total number of apps from the response
        total_apps = apps['total']

        # If we have collected all apps, break the loop
        if offset >= total_apps:
            break

        # Increase the offset for the next request
        offset += batch_size
    else:
        print("Failed to retrieve apps. Status code:", response.status_code)
        break

# Extract 'appid' (app names) from the 'results'
app_names = [app['appid'] for app in all_apps]

# Sort the app names in alphabetical order
app_names_sorted = sorted(app_names)

# Create a dictionary with the sorted app names as a list
app_data = {"app_names": app_names_sorted}

# Save the app data to a JSON file as before
output_file_path = "splunkbase_apps.json"
with open(output_file_path, 'w') as json_file:
    json.dump(app_data, json_file, indent=4)  # Use indent to format JSON

print(f"App data has been saved to {output_file_path}")
I am trying to merge two datasets, which are the results of two different searches, on a particular field value common to both. The field I want to merge on is not a 'primary key' of either dataset, so there are multiple events in each dataset with a given value of this field. My expected result is that for each event in the first dataset with a particular value of that field, I will end up producing n events in the resulting dataset, where n is the number of events in the second dataset that have that particular value in the field. So for example, if I have 3 events with that field value in dataset A and 4 events with that particular field value in dataset B, then I expect to have 12 events in the result dataset (after the merge). What Splunk command/s would be useful to merge these datasets in this fashion?
Good afternoon,

I have been trying to fix this error for a few weeks now. The app was working fine and then just stopped out of nowhere a few months ago. I have attempted full reinstalls of the app and searched all over Google and the Splunk Community pages; I have looked at multiple similar errors from other apps, and none of the solutions helped. Permissions are correct as well. Any help would be greatly appreciated!

The full error is: "Unable to initialize modular input "redfish" defined in the app "TA-redfish-add-on-for-splunk": introspecting scheme=redfish: script running failed (PID 4535 exited with code 1)"
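For what it is worth, I have been trying to surface the underlying Python traceback with a generic modular-input troubleshooting search along these lines (nothing specific to this add-on; the script's stderr usually lands in splunkd.log as ExecProcessor events):

index=_internal sourcetype=splunkd (redfish OR "TA-redfish-add-on-for-splunk") (log_level=ERROR OR ExecProcessor)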
Hi @swayam.pattanayak, Given how old this post is and that it did not get a reply, you may want to contact AppD Support for more help at this time. How do I submit a Support ticket? An FAQ