All Posts

index=myindex RecordType=abc ClassName IN ("ClassA", "ClassB", "ClassC")
| bucket _time span=1d
| stats avg(cpuTime) as avgCpuTime by ClassName _time
| xyseries ClassName _time avgCpuTime
| eval "%Reduction"=round(100*('16-Oct-24'-'17-Oct-24')/'16-Oct-24',0)
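Note that '16-Oct-24' and '17-Oct-24' in the eval must match the rendered xyseries column names exactly. A date-agnostic variant (a sketch; it trades the per-date columns for generic first-day/last-day names) avoids hardcoding the dates:

index=myindex RecordType=abc ClassName IN ("ClassA", "ClassB", "ClassC")
| bucket _time span=1d
| stats avg(cpuTime) as avgCpuTime by ClassName _time
| stats earliest(avgCpuTime) as firstDay latest(avgCpuTime) as lastDay by ClassName
| eval "%Reduction"=round(100*(firstDay-lastDay)/firstDay, 0)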
Hi @rolfkuper,
When you say "nothing happens", do you mean literally nothing? No errors or dialog or anything? I would have said that you're missing AGREETOLICENSE=Yes on that command line, but I don't understand why nothing at all would happen... You could try adding the following to the command line, and then maybe there'll be some hints in the log file:

/l*vx msiexec.log

Cheers,
- Jo.
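Putting the two pieces together, a sketch of the full command line (the MSI file name is a placeholder for whichever installer you are actually running):

msiexec.exe /i splunkforwarder-9.x.x-x64-release.msi AGREETOLICENSE=Yes /quiet /l*vx msiexec.log

If the install still silently does nothing, msiexec.log should at least show how far it got.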
I am using Splunk to generate the table below. The search runs over a two-day date range, where I am trying to compare the counts:

ClassName   16-Oct-24   17-Oct-24
ClassA      544         489
ClassB      39          47
ClassC      1937        2100

My Splunk query is as follows:

index=myindex RecordType=abc ClassName IN ("ClassA", "ClassB", "ClassC")
| bucket _time span=1d
| stats avg(cpuTime) as avgCpuTime by ClassName _time
| xyseries ClassName _time avgCpuTime

I need the output below, which has an extra column that gives the comparison. How can we tweak this query? Is there another way to achieve this in a more visually appealing manner?

ClassName   16-Oct-24   17-Oct-24   %Reduction
ClassA      544         489         10%
ClassB      39          47          -21%
ClassC      1937        2100        -8%
Hello All,

We are encountering an issue with the Splunk Update Password API. When we make a request to update a user's password, the API returns a 200 OK status code, but instead of the expected JSON response, we are receiving an HTML response. Additionally, despite the successful status code, the password is not being updated on the server. However, this worked earlier, and we have verified that the issue occurs on both our on-prem and Cloud instances.

Splunk Enterprise version: 9.3.1.0

We followed the official documentation: https://docs.splunk.com/Documentation/Splunk/9.3.1/RESTREF/RESTaccess#authentication.2Fusers
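A 200 response with an HTML body often means the request reached the Splunk Web port (8000) rather than the REST management port (8089), in which case no password change would occur. A minimal sketch of the documented call against the management port (host, credentials, and user name are assumptions; output_mode=json requests a JSON body instead of the default Atom XML):

curl -k -u admin:changeme https://localhost:8089/services/authentication/users/testuser \
    -d password=NewSecret123 \
    -d output_mode=json

If this returns the user entry as JSON and the new password works, the original client was most likely pointed at the wrong port or following a redirect to the web UI.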
Hi.

We are just starting to use Splunk Infrastructure Monitoring, and have added the "Splunk Infrastructure Monitoring Add-on". We have created the connection to Splunk Observability without any problems, and admins can run the search command "| sim flow query=..." without any problems.

The problem arises when a normal user tries the same command, where we get the following error message: "Error in "sim" command: Splunk Infrastructure Monitoring API Connection not configured."

I have reviewed all the permissions I could think of, and except for a couple of views, there is public access. I can't find any new capabilities that might need to be added.

If anyone can point me in the right direction, it would be greatly appreciated.

Kind regards
Ty. Works great in 2024.
Great, thanks. I took my own way, doing this:

| eval earliest_epoch="$time.earliest$", latest_epoch="$time.latest$"
| eval earliest_epoch=case(isnum(earliest_epoch), earliest_epoch, earliest_epoch=="now", time(), earliest_epoch=="", 0, true(), relative_time(time(), earliest_epoch))
| eval latest_epoch=case(isnum(latest_epoch), latest_epoch, latest_epoch=="now", time(), true(), relative_time(time(), latest_epoch))
Thank you for your reply. Our department's policy seems to be to use exported syslog and forwarding... I referred to this video https://www.youtube.com/watch?v=wS5-jMS080s and I'm trying to monitor syslog with Splunk. However, no events are displayed in Splunk search. I used Wireshark (tshark) and confirmed that the Splunk server is receiving the syslog packets. Is there anything else that I should check?
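One quick sanity check (a sketch, assuming you can search all indexes and run it over All time): events with a badly parsed timestamp or an unexpected destination index often do exist but fall outside your search window. An all-index tstats scan shows where, if anywhere, the data landed:

| tstats count where index=* by index, sourcetype, source

If the syslog events appear under an unexpected index or sourcetype, adjust your search accordingly; if nothing appears at all, re-check the port and index configured on the network input in inputs.conf.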
Hi @jg91,

I don't know your data; maybe there's some numeric field that can be interpreted as a timestamp, or there's a previous event from 2021, I don't know. But using the above configuration you should be able to solve it.

Ciao.

Giuseppe
Thank you, but my question is why it defaults to a timestamp from 2021, especially since this is a freshly created container/pod with no prior data ingested. Why is it using that specific date?
Hi @myusufe71,

Good for you, see you next time!

Let me know if I can help you more, or, please, accept one answer for the other people of the Community.

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated
Hi @whipstash,

Don't use the join command, it's a very slow command; use a different approach:

index=INDEX sourcetype=sourcetypeA
| rex field=eventID "\w{0,30}+.(?<sessionID>\d+)"
| <do some filtering on the infoIWant fields here>
| append
    [ search index=INDEX sourcetype=sourcetypeB
    | stats count AS eventcount earliest(_time) AS earliest latest(_time) AS latest BY sessionID
    | eval duration=latest-earliest
    | where eventcount=2
    | fields sessionID duration ]
| stats values(eventID) AS eventID values(duration) AS duration values(count) AS count BY sessionID

Please adapt this approach to your real situation.

Ciao.

Giuseppe
Wow it works. @gcusello you are super duper. Thanks!
Hi @myusufe71,

Let me understand: you want to filter the results of the main search with the results of the subsearch, is that correct?

In this case, please try this:

index=abc
    [ | inputlookup test.csv WHERE cluster="cluster1"
    | dedup host
    | fields host ]

Just pay attention that the field used as the key (host) has the same name in both the main search and the subsearch (it's case sensitive!).

Ciao.

Giuseppe
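To see exactly what the subsearch expands to, you can run it on its own with format appended (a sketch, assuming the lookup has a host column):

| inputlookup test.csv WHERE cluster="cluster1"
| dedup host
| fields host
| format

This returns a single search field like ( ( host="host1" ) OR ( host="host2" ) OR ( host="host3" ) ), which is logically equivalent to the host IN (host1, host2, host3) form asked about.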
Hi @jg91,

If your csv doesn't contain any timestamp, Splunk can assign either the index-time timestamp or the timestamp from the previous event; it probably assigned the second one.

I suggest specifying in props.conf that the timestamp is the current time:

DATETIME_CONFIG = CURRENT

as described at https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Propsconf#Timestamp_extraction_configuration

Ciao.

Giuseppe
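A minimal props.conf sketch (the sourcetype name is an assumption; for structured inputs like CSV with INDEXED_EXTRACTIONS, the stanza must live on the Universal Forwarder itself, since that is where the parsing happens):

[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
DATETIME_CONFIG = CURRENT

After deploying it, restart the forwarder; note that events already indexed keep their old timestamps.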
Hi @virgupta,

Splunk is certified on every Linux platform based on kernel 4.x and greater or 5.4, as you can see at https://docs.splunk.com/Documentation/Splunk/9.3.1/Installation/Systemrequirements or at https://www.splunk.com/en_us/download/splunk-enterprise.html

Ciao.

Giuseppe
Hi,

I'm trying to ingest CSV data (without a timestamp) using a Universal Forwarder (UF) running in a fresh container. When I attempt to ingest the data, I encounter the following warning in the _internal index, and the data ends up being ingested with a timestamp from 2021. This container has not previously ingested any data, so I'm unsure why it defaults to this date.

10-18-2024 03:42:00.942 +0000 WARN DateParserVerbose [1571 structuredparsing] - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Wed Jan 13 21:06:54 2021). Context: source=/var/data/sample.csv|host=splunk-uf|csv|6215

Can someone explain why this date is being applied, and how I can prevent this issue?
I have a subquery result of:

host1
host2
host3

And I want to put all of these host results as host=* in the main query.

1. Subquery:

| inputlookup test.csv
| search cluster="cluster1"
| stats count by host
| fields - count

2. Main query using the subquery:

index=abc host="*"

where host="*" is the subquery result. Or is there any way to extract the subquery result as host IN (host1, host2, host3) in the main query?
Great. But maybe the dashboard will not update the variables/tokens until you manually change the picker.

Let's say I choose "-5m" from the picker and latest defaults to "now": relative_time(time(), $earliest$) will remain fixed to the computed UNIX-time value, even when my panels refresh. So, with the dashboard's panels refreshing, the -5m effectively becomes -6m, -7m, -8m, -9m, -10m... until you change the picker. The same applies to $latest$=="now", time(): it stays fixed until you refresh the entire dashboard or the picker.
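One way around the freeze (a sketch in Simple XML; the panel query and refresh interval are assumptions) is to pass the raw picker tokens straight into the search's time bounds, so each scheduled refresh re-evaluates relative values like -5m instead of reusing a frozen epoch:

<search>
  <query>index=myindex | timechart count</query>
  <earliest>$time.earliest$</earliest>
  <latest>$time.latest$</latest>
  <refresh>1m</refresh>
  <refreshType>delay</refreshType>
</search>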
Hey @dstoev, check whether the CSV has a proper header and whether you have marked the checkbox for "Parse all files as CSV" on the input configuration page.