All Topics

Hi Splunkers, I have a question regarding the Splunk Observability (o11y) heatmap chart. I'm wondering if it's possible to exclude or rename the "n/a" values on my panel. I think those are the stateless pods that are no longer sending a namespace. Here are my plot and chart options (screenshots). Thanks!
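A hedged sketch, in case the panel is backed by SignalFlow: filtering on a wildcard keeps only time series that actually report the dimension, which should drop the "n/a" bucket. The metric and dimension names below (container_cpu_utilization, kubernetes_namespace) are assumptions; substitute whatever your chart actually plots.

# Sketch only: keep MTS that carry a namespace value (names assumed)
data('container_cpu_utilization',
     filter=filter('kubernetes_namespace', '*')).publish()

If the chart was built in the UI rather than with SignalFlow, the equivalent would be adding a filter on the namespace dimension in the chart's filter bar.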
index  title  id
A      AA     111
A      CC     111
B      BB     111

If the index is A and the title is AA, I'm trying to find that id in index B and count how many times it appears. In the above example, the second row has title CC, so even though the id value is the same, it is not counted. There is one id 111 in index B, so the answer I want is 1. How do I write this query?
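A hedged sketch of one way to do this, assuming the indexes are literally named A and B and the field is extracted as id: collect the qualifying ids from index A in a subsearch, then count the matching events in index B.

index=B
    [ search index=A title=AA
      | dedup id
      | fields id ]
| stats count by id

The subsearch returns id=111, so the outer search matches only index B events with that id, and the final stats yields 1 for the example above. The usual subsearch caveats (result limits, timeouts) apply if the id list is large.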
The first search query returns a count of 26 for domain X:

index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency")
| stats count by domain

But when I run the query below to see just the events corresponding to domain=X, I get zero events:

index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency") domain="X"

Any clue why this might be happening?
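A hedged diagnostic: this pattern often means the domain value carries leading/trailing whitespace or a case difference, so the stats label looks like X while the literal filter domain="X" doesn't match. Comparing the exact values can confirm:

index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency")
| eval domain_len=len(domain)
| stats count by domain domain_len

If domain_len is longer than expected, something like | where trim(domain)="X" (or a wildcard, domain="*X*") should confirm it.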
I am looking at logs for asynchronous calls (sending a message to Kafka and receiving the ack). We have two events: the first when we receive the message, start processing, and send it to Kafka; the second when we receive the response back from Kafka. I have a unique message ID to track both events, and I want to capture the average processing time across all unique IDs. In the query below I have not yet added a condition for the unique ID, and I am not getting the "difference" value. Can you please help!

index=web* "Message sent to Kafka" OR "Response received from Kafka"
| stats earlies(_time) as Msg_received, latest(_time) as Response_Kafka
| eval difference=Response_Kafka-Msg_received
| eval difference=strftime(difference,"%d-%m-%Y %H:%M:%S")
| eval Msg_received=strftime(Msg_received,"%d-%m-%Y %H:%M:%S")
| eval Response_Kafka=strftime(Response_Kafka,"%d-%m-%Y %H:%M:%S")
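A hedged sketch of a fix: earlies is a typo for earliest, and without a by clause stats collapses all events into one row, so no per-message duration survives. Assuming the unique message ID is extracted in a field called message_id (adjust to the real field name):

index=web* ("Message sent to Kafka" OR "Response received from Kafka")
| stats earliest(_time) as Msg_received latest(_time) as Response_Kafka by message_id
| eval difference=Response_Kafka-Msg_received
| stats avg(difference) as avg_processing_seconds

Also note that difference is a duration in seconds, not a timestamp, so to display it per message use tostring(difference, "duration") rather than strftime.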
Currently, our company successfully collects most of the Microsoft 365 logs, but we are facing challenges with gathering the security logs. We aim to comprehensively collect all security logs for Microsoft 365, encompassing elements such as Intune, Defender, and more. Could you please provide advice on how to effectively obtain all the security logs for Microsoft 365?
Hello, I have the following example JSON data:

spec: {
  field1: X,
  field2: Y,
  field3: Z,
  containers: [
    { name: A, privileged: true },
    { name: B },
    { name: C, privileged: true }
  ]
}

I'm trying to write a query that only returns privileged containers. I've been trying to use mvfilter, but that won't return the name of the container. Here's what I was trying:

index=MyIndex spec.containers{}.privileged=true
| eval priv_containers=mvfilter(match('spec.containers{}.privileged',"true"))
| stats values(priv_containers) count by field1, field2, field3

This, however, just returns "true" in the priv_containers values column instead of the container's name. What would be the best way to accomplish that?
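A hedged sketch: mvzip-style pairing breaks on this data because container B has no privileged key, so the name and privileged multivalue fields have different lengths and can't be aligned. Expanding the containers array element by element sidesteps that (field names are taken from the sample above; adjust as needed):

index=MyIndex spec.containers{}.privileged=true
| spath path=spec.containers{} output=container
| mvexpand container
| spath input=container
| where privileged="true"
| stats values(name) as priv_containers count by field1, field2, field3

After mvexpand, each result holds a single container object, so spath can extract its name and privileged fields together.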
I have an unstable data feed that sometimes only reports on a fraction of all assets. I do not want such periods to show any number. The best way I can figure to exclude those periods is to detect a sudden drop in some sort of total, so I set up a condition after timechart like this:

| addtotals
| delta "Total" as delta
| foreach * [eval <<FIELD>> = if(-delta > Total OR Total < 5000, null(), '<<FIELD>>')]

The algorithm works well for Total, and for some series in the timechart, but not for all of them, and not all the time. Here are two emulations using index=_internal on my laptop. One groups by source, the other by sourcetype.

index=_internal earliest=-7d
| timechart span=2h count by source ``` data emulation 1 ```

With group by source, all series seem to blank out as expected. Now I can run the same tally by sourcetype:

index=_internal earliest=-7d
| timechart span=2h count by sourcetype ``` data emulation 2 ```

This time, every gap has at least one series that is not null; some series go to zero instead of null, and some are even clearly above zero. What is the determining factor here? If you have suggestions about alternative approaches, I would appreciate those as well.
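A hedged guess at the determining factor: foreach * also iterates over Total and delta themselves, so once the loop has nulled Total, the condition evaluates to null (treated as false) for every column processed after it, and those series survive. Whether a series column is processed before or after Total depends on field order, which differs between the source and sourcetype runs. Moving the decision into underscore-prefixed helper fields, which SPL wildcards generally don't match, should avoid the self-clobbering (threshold kept from the example above):

index=_internal earliest=-7d
| timechart span=2h count by sourcetype
| addtotals
| delta Total as _delta
| eval _bad=if(-_delta > Total OR Total < 5000, 1, 0)
| foreach * [eval <<FIELD>> = if(_bad==1, null(), '<<FIELD>>')]
| fields - _delta _bad

This is a sketch, not a verified diagnosis; if it still misbehaves, dropping the suspect rows outright with | where Total >= 5000 after addtotals is a blunter alternative that no series can leak through.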
Hello guys,

We have some orphaned saved searches in our Splunk Cloud instance that are viewable via the following REST search:

| rest splunk_server=local /servicesNS/-/-/saved/searches add_orphan_field=yes count=0

However, when looking for them under Searches > Reports and Alerting, they do not show up. There are also zero saved searches viewable under Settings > All Configurations > Reassign Knowledge Objects > Orphaned (with all filters set to all).

We are trying to reassign these searches via REST with the following example syntax:

curl -sk -H 'Authorization: Bearer <token>' -d 'owner=<name of valid owner>' https://<splunk cloud.com>:8089/servicesNS/nobody/search/saved/searches/%28%20Customers-LoyaltyEnrollment_1.0%20%29

but we are receiving an error. This is not an issue with the id, as the following is able to pull the saved search info:

curl -sk -H 'Authorization: Bearer <token>' https://<splunk cloud.com>:8089/servicesNS/nobody/search/saved/searches/%28%20Customers-LoyaltyEnrollment_1.0%20%29

Does anyone have better syntax for posting this owner change to the saved searches?
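A hedged alternative: ownership changes on knowledge objects normally go through the object's acl endpoint rather than the object itself, and the POST must include both owner and sharing. Something like the following (sharing=app is an assumption; use whatever sharing level the search should keep):

curl -sk -H 'Authorization: Bearer <token>' \
  -d 'owner=<name of valid owner>' -d 'sharing=app' \
  'https://<splunk cloud.com>:8089/servicesNS/nobody/search/saved/searches/%28%20Customers-LoyaltyEnrollment_1.0%20%29/acl'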
I currently have events that include load times and events that include header colour for my app. Both kinds of events carry the user's session id. How do I join the two events on session id so I can see the load time by header colour?

Query for load time:

index="main" measurement_type=loadTime screen_class_name=HomeFragment

Query for header colour:

index=api_analytics sourcetype=api_analytics
| `expand_api_analytics`
| search iri=home/header
| spath input=analytic path=session_id output=session_id
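A hedged sketch that avoids join by unioning both searches and grouping on session_id. The field names loadTime and header_colour, and the spath path for the colour, are assumptions to adjust to your data:

index="main" measurement_type=loadTime screen_class_name=HomeFragment
| fields session_id loadTime
| append
    [ search index=api_analytics sourcetype=api_analytics
      | `expand_api_analytics`
      | search iri=home/header
      | spath input=analytic path=session_id output=session_id
      | spath input=analytic path=header_colour output=header_colour
      | fields session_id header_colour ]
| stats values(loadTime) as loadTime values(header_colour) as header_colour by session_id
| stats avg(loadTime) as avg_load_time by header_colour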
Hi Splunk community, I have JSON logs and I want to remove the prefix from each event, capturing from {"successfulSetoflog until "AZURE API Health Event"}.

Sample event:

2020-02-10T17:42:41.088Z 775ab4c6-ccc3-600b-9c84-124320628f00 {"records": [{"value": {"successfulSetoflog": [{"awsAccountId": "123456789123", "event": {"arn": "arn:aws:health:us-east-........................................................  1}, "detail-case": "AZURE API Health Event"}}]}

The expected output would be:

{"successfulSetoflog": [{"awsAccountId": "123456789123", "event": {"arn": "arn:aws:health:us-east-........................................................  1}, "detail-case": "AZURE API Health Event"}
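Two hedged options, both assuming the payload always starts with {"successfulSetoflog. At search time, a rex can grab everything from that point on:

| rex "(?<payload>\{\"successfulSetoflog.*)"
| spath input=payload

At index time, a SEDCMD in props.conf (stanza name assumed) can strip the prefix before the event is written:

[your_sourcetype]
SEDCMD-strip_prefix = s/^.*?(\{"successfulSetoflog)/\1/

Either way, the closing brackets of the records/value wrapper will still trail the event, so a second expression may be needed to trim them.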
I have a JSON file that is formatted like this:

{
  "meta": {
    "serverTime": 1692112678688.699,
    "agentsReady": true,
    "status": "success",
    "token": "ABCDEFG",
    "user": {
      "userName": "username",
      "role": "ADMIN"
    }
  },
  "vulnerabilities": [
    {
      "id": "pcysys_linux_0.10000000",
      "creation_time": 1690581702599.0,
      "name": "name",
      "summary": "summary",
      "found_on": "Host: 10.10.10.10",
      "target": "Host",
      "target_id": "abcdefg",
      "port": 445,
      "protocol": "abc",
      "severity": 3.5,
      "priority": null,
      "insight": "this is the insight",
      "remediation": "this is the remediation"
    },
    {
      "id": "pcysys_linux_0.10000000",
      "creation_time": 1690581702599.0,
      "name": "name",
      "summary": "summary",
      "found_on": "Host: 10.10.10.10",
      "target": "Host",
      "target_id": "abcdefg",
      "port": 445,
      "protocol": "abc",
      "severity": 3.5,
      "priority": null,
      "insight": "this is the insight",
      "remediation": "this is the remediation"
    }
  ]
}

I am trying to ingest just the vulnerabilities. It works when I try it in the Splunk UI, but when I save it in my props.conf file it doesn't split correctly, and the id from one section gets appended to the end of the previous one.

Here is what I am trying:

[sourcetype]
LINE_BREAKER = }(,[\r\n]+)
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = 1
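A hedged sketch of a breaker anchored on the boundary between vulnerability objects (the },{ followed by "id"), which avoids splitting on the nested objects elsewhere in the file:

[sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s*,\s*)\{\s*"id"
NO_BINARY_CHECK = true

Also worth checking: LINE_BREAKER only takes effect where parsing happens (indexer or heavy forwarder), so a stanza that works in the UI preview can appear to do nothing if the props.conf was deployed to a universal forwarder instead. The first event will still carry the meta preamble, which you may want to discard separately.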
Hello, I need to monitor a Python script I've developed. So far it does have a logging object, and logging.info, requests, and MySQL sqlhooks logs are shown when pyagent run is started, but I can't see any reference to my app on the server so far. Any recommendation would make me really grateful.

My case is similar to this one, but I'm on the v23 Python agent:
https://stackoverflow.com/questions/69161757/do-appdynamics-python-agent-supports-barebone-python-scripts-w-o-wsgi-or-gunicor
https://docs.appdynamics.com/appd/23.x/23.9/en/application-monitoring/install-app-server-agents/python-agent/python-supported-environments

I don't necessarily need metrics monitoring, but I do really need to monitor the events happening in the script. Do you folks have any suggestion on whether it's possible for the AppDynamics agent to hook only into the asyncio library, performing or simulating the same layer that the Java proxy agent sniffs into in the Python VM? Is it possible to send these 'stimuli' straight to another Java program that would make the BT call?
https://rob-blackbourn.medium.com/asgi-event-loop-gotcha-76da9715e36d
https://www.uvicorn.org/#fastapi
https://docs.appdynamics.com/appd/21.x/21.4/en/application-monitoring/install-app-server-agents/java-agent/install-the-java-agent/instrument-jvms-started-by-batch-or-cron-jobs

Finally, I could even dare to look further at using OpenTelemetry for my case, in order to collect the main points. Is OpenTelemetry a standard feature of AppDynamics? Is it an extra paid option?
https://github.com/Appdynamics/opentelemetry-collector-contrib
https://opentelemetry.io/docs/instrumentation/python/automatic/
https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/logging/logging.html
I am trying to merge two datasets, the results of two different searches, on a particular field value common to both. The field I want to merge on is not a 'primary key' in either dataset, so there are multiple events in each dataset with a given value of this field. My expected result is that each event in the first dataset with a particular value of that field produces n events in the resulting dataset, where n is the number of events in the second dataset with that value in the field. So, for example, if I have 3 events with a given field value in dataset A and 4 events with that value in dataset B, then I expect 12 events in the result dataset after the merge. What Splunk command(s) would be useful to merge these datasets in this fashion?
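If subsearch limits are acceptable, join with max=0 does exactly this: by default join keeps at most one subsearch match per row, but max=0 keeps them all, giving the n x m expansion per key. A sketch, with field_x standing in for the common field:

<search for dataset A>
| join type=inner max=0 field_x
    [ search <search for dataset B> ]

With 3 matching events in A and 4 in B for a given field_x value, this yields 12 joined rows, subject to the usual subsearch result limits.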
Good afternoon,

I have been trying to fix this error for a few weeks now. The app was working fine and just stopped out of nowhere a few months ago. I have attempted full reinstalls of the app and searched all over Google and the Splunk Community page; I have looked at multiple similar errors from other apps and none of the solutions helped. Permissions are correct as well. Any help would be greatly appreciated!

The full error is: "Unable to initialize modular input "redfish" defined in the app "TA-redfish-add-on-for-splunk": introspecting scheme=redfish: script running failed (PID 4535 exited with code 1)"
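A hedged debugging step: that introspection error usually means the script is crashing on startup, and running it by hand surfaces the real traceback. The script path below is a guess based on the app name:

$SPLUNK_HOME/bin/splunk cmd python3 $SPLUNK_HOME/etc/apps/TA-redfish-add-on-for-splunk/bin/redfish.py --scheme

The ExecProcessor messages in _internal often carry the underlying Python error as well:

index=_internal sourcetype=splunkd ExecProcessor redfish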
I need to get the list of ad hoc searches and saved searches run by each user from the audit logs. How do I differentiate these searches in the _audit logs? Is there a specific keyword to identify them?
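A hedged sketch: in _audit, searches run by the scheduler or from a saved search carry a savedsearch_name value, while ad hoc searches leave it empty, so something like this should split them by user:

index=_audit action=search info=granted search=*
| eval search_type=if(isnull(savedsearch_name) OR savedsearch_name="", "ad hoc", "saved/scheduled")
| stats count by user search_type

Scheduled runs can also be recognized by search_id values beginning with scheduler_.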
Hello Splunkers!

I am looking for a way to monitor and retrieve the users that logged into my Linux machine, but only users that are part of the root or wheel groups, or that are present in any sudoers file. I was able to get the users who SSH into my machine using the /var/log/secure file, but my challenge is to check whether a given user "has a lot of rights or not". I have some ideas in mind to achieve this, but maybe there is something out of the box in Splunk or the TA Nix add-on. If somebody has already tried something similar, any help would be appreciated!

Thanks!
GaetanVP
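One hedged approach: maintain a lookup of privileged users (for example, exported periodically from getent group root wheel plus the sudoers files) and filter the SSH logins against it. The lookup name privileged_users and its fields are hypothetical:

index=os source=/var/log/secure "Accepted"
| rex "Accepted \w+ for (?<user>\S+) from"
| lookup privileged_users user OUTPUT privileged
| where privileged="true"
| stats count latest(_time) as last_login by user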
I have a search query that takes a search value from a dropdown. Example: the dropdown has the values All, A, B, and the query uses:

| where productType="$dropdown$"

How do I remove the where clause when All is selected? There is no productType value "All".
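A common hedged trick: give the All choice the static value * and filter with search instead of where, since search honors wildcards while where does a literal comparison:

<choice value="*">All</choice>        (in the dropdown's static options)

| search productType="$dropdown$"

When All is selected the clause becomes productType="*", which matches every productType.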
Hi Team, I am trying to install the Machine Agent on a Unix machine hosted in AWS. The installation is fine, and I can see the AppD Machine Agent service up and running, but it is not getting registered with the controller. When I check the logs, it shows a timed-out error. Is there anything specific I need to do for AWS EC2 instances that have the Machine Agent installed?
My Splunk instance is running in GMT and I want to schedule an alert per China time. This is the cron:

*/5 21-23,0-13 * * 0-5

The logic is to trigger the alert every 5 minutes, Monday to Friday, 5 AM till 10 PM China time, but the alert is also getting triggered on Sunday. How can we customize the cron?
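A hedged worked conversion: China Standard Time is UTC+8, so Monday-Friday 05:00-22:00 CST corresponds to Sunday 21:00 GMT through Friday 14:00 GMT. The day-of-week field 0-5 applies to both the 21-23 and 0-13 hour ranges, which is what produces the Sunday daytime runs. A single cron expression can't give each hour range its own weekday range, so one option is two alerts with the same search:

*/5 21-23 * * 0-4    (Sun-Thu 21:00-23:55 GMT = Mon-Fri 05:00-07:55 CST)
*/5 0-13 * * 1-5     (Mon-Fri 00:00-13:55 GMT = Mon-Fri 08:00-21:55 CST)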
Hi, I am attempting to determine whether Splunk is installed on all of the local systems in our environment. Is there a way to check this through tags, the Windows Registry (regedit), ParentProcessName, or a PowerShell script? If so, could you please provide guidance on the process? Thanks
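A hedged PowerShell sketch that checks the two usual signs of a Universal Forwarder install; SplunkForwarder is the default service name, but it can be changed at install time:

# Is the forwarder service present?
Get-Service -Name SplunkForwarder -ErrorAction SilentlyContinue

# Is Splunk listed in the uninstall registry keys?
# (on 64-bit systems, also check the WOW6432Node path)
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object { $_.DisplayName -like '*Splunk*' } |
    Select-Object DisplayName, DisplayVersion

From the Splunk side, comparing | metadata type=hosts (or the forwarder management page) against your asset inventory shows which machines are actually reporting.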