All Posts


index="intau" host="server1" sourcetype="services_status.out.log" service="HTTP/1.1" status=* | chart count by status | eventstats sum(count) as total | eval percent=100*count/total | eval percent=round(percent,2) | eval SLO =if( status="200","99,9%","0,1%") | where NOT (date_wday=="saturday" AND date_hour >= 8 AND date_hour < 11) | fields - total count   I have the above query and the above result. How can I combine the 502 and 200 results to show our availability, excluding the maintenance time of 8pm to 10pm every Saturday? And how can I make it look like the drawing I produced?
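One possible approach, sketched below with some assumptions: it reuses the index, sourcetype, and fields from the query above, treats "availability" as the percentage of status=200 responses among all responses, and moves the maintenance filter before the aggregation (after `chart`, the `date_*` fields are no longer available). Note that 8pm to 10pm would be `date_hour` 20 and 21, not 8 to 11; also, `date_*` fields only exist when they are extracted from the raw event timestamp, so verify they are present in your data.

```
index="intau" host="server1" sourcetype="services_status.out.log" service="HTTP/1.1" status=*
| where NOT (date_wday=="saturday" AND date_hour >= 20 AND date_hour < 22)
| stats count(eval(status="200")) as ok, count as total
| eval availability=round(100*ok/total, 2)
| table availability
```

This collapses the 200/502 breakdown into a single availability figure; if you still want the per-status table alongside it, keep your original `chart` in a separate panel.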
Thanks, I think https://docs.splunk.com/Documentation/Splunk/9.2.1/InheritedDeployment/Ports is the one included?
Hi @irisk, did you try using INDEXED_EXTRACTIONS = json in your sourcetype? Otherwise, have you already tried the spath command (https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Spath)? Ciao. Giuseppe
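For the nested-JSON case, a minimal spath sketch might look like the following (the index and sourcetype names are placeholders, and this assumes the event payload is valid JSON so spath can parse the inner `log` object):

```
index=your_index sourcetype=your_sourcetype
| spath input=_raw path=log output=inner_log
| spath input=inner_log
| table trace_id request_time log_type message
```

The first spath pulls out the inner object as a string, the second extracts its keys into fields; if the payload uses single quotes rather than valid JSON, spath will not parse it and you would need to normalize the quoting first (e.g. with `replace()` in an eval).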
Hello, I receive an event of the following format: { log: { 'trace_id': 'abc', 'request_time': '2024-06-04 10:49:56.470140', 'log_type': 'DEBUG', 'message': 'hello'} } Is it possible to extract the inner JSON from all the events I receive? * each key in the inner json will be a column value but the me
After a few tries, I changed the test case and saw that the problem was that I was asking Splunk to save an event "in the future", and apparently that's not possible.
SHC = Search Head Cluster. You can use openssl to check the validity of a certificate. There are lots of examples on the net showing how to do this.
Whatever you have mentioned is correct, but we are facing this issue for only one log; for the others, show source loads fine. After truncating, we are still getting: Failed to find target event in final sorted event list. Cannot properly prune results
Unfortunately, as you're introducing an additional external component (cribl worker), it's hard to say what happens where. BTW, it's probably not that cribl merges anything, more like it doesn't split the events properly since UF sends data in chunks, not single events. So the one at fault here is most probably the cribl one. BTW, why don't you just send UF->Indexer (or UF->HF->Indexer)?
Can anybody help, please?
I have a small query that splits events depending on a multivalue field, and each nth date from the multivalue needs to become the _time of the nth "collected" row.   index=test source=test | eval fooDates=coalesce(fooDates, foo2), fooTrip=mvsort(mvdedup(split(fooDates, ", "))), fooCount=mvcount(fooTrip), fooValue=fooValue/fooCount | mvexpand fooTrip | fields - _raw | eval _time=strptime(fooTrip, "%F") | table _time VARIOUS FIELDS | collect index=test source="fooTest" addtime=true   The output table view is exactly what I'm expecting, but when I search for these fields in the new source, they have today's time (or, with addtime=false, the earliest time from the time picker). Also, using testmode=true, I still see the results as they are supposed to be. What's wrong? Thanks
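One thing worth checking, as a hedged guess: if any of the parsed fooTrip dates land in the future relative to now, collect may not index those events as expected. A sketch of the same query with a guard added to isolate that case (everything else unchanged from the query above):

```
index=test source=test
| eval fooDates=coalesce(fooDates, foo2), fooTrip=mvsort(mvdedup(split(fooDates, ", "))), fooCount=mvcount(fooTrip), fooValue=fooValue/fooCount
| mvexpand fooTrip
| fields - _raw
| eval _time=strptime(fooTrip, "%F")
| where _time <= now()
| collect index=test source="fooTest" addtime=true
```

If the rows that disappear are exactly the filtered ones, future-dated timestamps are the culprit.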
Thank you for your answer deepakc, but that is not correct. I do not want to have a simple KPI dashboard. Each detailed (sub) dashboard has custom queries which I don't want to run automatically twice, once in the detailed board and once on the summary board. Maybe a simple example makes my question more clear: App1-Dashboard: - 10 different custom queries which will show 10 different traffic-light-style indicators App2-Dashboard: - 50 different custom queries which will show 50 different traffic-light-style indicators App3-Dashboard: - 15 different custom queries which will show 15 different traffic-light-style indicators The logs are not simply evaluated based on log level, but rather based on specific string combinations. Instead of looking at each of my three dashboards one by one, I would like to have a "Summary Dashboard" which only includes three traffic lights, one for each app mentioned above. If e.g. App2-Dashboard has one of its 50 traffic light warnings, I would like the App2 traffic light in my "Summary Dashboard" to indicate yellow or red, to make sure I'm aware of any problem in App2. I do not want to have all custom queries run in the Summary Dashboard and on each App Dashboard.
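One common pattern for this (a sketch only; the index, source, and field names below are hypothetical): schedule each app's health checks as saved searches that write one status row per check into a summary index, then have both the per-app dashboards and the Summary Dashboard read from that index. Each query then runs once on its schedule, not once per dashboard view. The summary panel could then be as simple as:

```
index=summary_health source="app_health" earliest=-15m
| stats max(severity) as severity by app
| eval light=case(severity=0, "green", severity=1, "yellow", true(), "red")
| table app light
```

Here each scheduled search is assumed to emit an `app` name and a numeric `severity` (0/1/2) via `collect`; the trade-off is that the summary lags by the search schedule interval rather than being computed live.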
Sorry, I've opened a new post about my problem. I think that I have given some wrong information here, which I have noticed in the meantime. https://community.splunk.com/t5/Getting-Data-In/Collect-journalctl-events-with-a-Splunk-UF-to-Cribl-Stream-in/m-p/689510#M114765  
Hello, Here I have a small picture of how the environment is structured: Red arrow -> Source Splunk TCP (Cribl Stream)   I'm trying to forward the journald data from the Splunk Universal Forwarder to the Cribl worker (black to blue box). I have configured the forwarding of the journald data using the instructions from Splunk. (Get data with the Journald input - Splunk Documentation)   I can forward the journald data and it also arrives at the Cribl worker. Problem: the Cribl worker cannot distinguish the individual events in the journald data, i.e. it does not know when a single event ends, and thus combines several individual events into one large one. The Cribl worker always merges about 5-8 journald events. (I have marked the individual events here. However, they arrive as such a block, sometimes more together, sometimes less.) Event 1: Invalid user test from 111.222.333.444port 1111pam_unix(sshd:auth):check pass; userunknownpam_unix(sshd:auth):authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.222.333.444Failed password forinvalid user testfrom 111.222.333.444 port1111 ssh2error: Received disconnect from 111.222.333.444port 1111:13: Unableto authenticate [preauth]Disconnected from invaliduser test 111.222.333.444port 1111 [preauth]   What I tested: If I forward the journald data from the universal forwarder not via a Cribl worker but via a heavy forwarder (the blue box in the picture above is then no longer a Cribl worker but a Splunk Heavy Forwarder), then the events are individual and easy to read.
Like this: Event 1:   Invalid user test from 111.222.333.444 port 1111   Event 2:   pam_unix(sshd:auth): check pass; user unknown   Event 3:   pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.222.333.444   Event 4:   Failed password for invalid user test from 111.222.333.444 port 1111 ssh2   Event 5:   error: Received disconnect from 111.222.333.444 port 1111:13: Unable to authenticate [preauth]   Event 6:   Disconnected from invalid user test 111.222.333.444 port 1111 [preauth]   -------------------------------- I'm looking for a solution so that I can send the journald data through the setup shown in the first figure, but have it arrive split into individual events as in the second case. Thanks in advance for your help.
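One Splunk-side option worth trying, sketched below with assumptions: since the UF sends data in chunks rather than single events, enabling an event breaker on the UF for the journald sourcetype tells it to break the stream on event boundaries before forwarding, which may give the Cribl worker clean per-event chunks. The stanza name `[journald]` is an assumption; use whatever sourcetype your journald input actually assigns.

```
# props.conf on the universal forwarder
# (sourcetype name is an assumption -- match it to your journald input)
[journald]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
```

Alternatively, configuring the corresponding line breaker on the Cribl source itself would address it on the receiving side; the UF-side breaker just keeps the fix within Splunk configuration.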
Extending @ITWhisperer 's answer - unless you have a third-party solution (some form of asset inventory software, or even your own scripted input listing installed software), Splunk on its own cannot tell you, since it only works on the data you give it. So by default you can only pull what your Windows machine produces (event logs, maybe some log files). If you can find this info in what Windows reports on its own - good, you can use it. But I don't recall that it does.
Having said that - as with most of the questions starting with "how to find all" - it's possible to do this only for a specific subset of cases. There are ways of creating searches such that you can't easily tell what they're effectively searching on (aliases, eventtypes, tags, subsearches, lookups...).
This is obviously a mistake on the docs page (unfortunately dev docs don't include the feedback form). How would you write JS code with Python SDK? It makes no sense.
There's a portal for such feature requests - https://ideas.splunk.com/  
Where did you put your props.conf? (on which component) And what does your ingest process look like? Because that's apparently not data from a windows eventlog input.
What are you using for authentication? If you are using external authentication source (like LDAP or SAML) your users will get re-created as soon as they authenticate using that source.