All Posts


Hello, while parsing logs I'm trying to extract fields, but at some point I receive the following message: "The extraction failed. If you are extracting multiple fields, try removing one or more fields. Start with extractions that are embedded within longer text strings." Even when I highlight only the fields that fail to extract, I get the same message. Could this issue be related to the configuration file limits.conf?
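For reference, a minimal limits.conf sketch of the extraction-related settings that are commonly checked here; whether they drive this exact error message is an assumption, and the values shown are the documented defaults:

[kv]
# maximum characters of an event scanned for field extraction
maxchars = 10240
# maximum number of fields extracted automatically
limit = 100

[rex]
# PCRE match limit for regex-based extractions
match_limit = 100000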
We found something else that helped us, but thanks for your help!
@richgalloway, I want to show the total data coming from each query over _time in an area chart. For example, when I run the 1st query I get 100.0789 as output; I want to split this 100.0789 by _time and show it in an area graph.
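A hedged sketch of one way to express that, assuming the per-query value comes from a hypothetical field named total; timechart buckets the values by _time so an area chart gets one point per span instead of a single number:

... | timechart span=1h sum(total) as Total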
Thanks for your responses! I do not have a local firewall and SELinux is disabled. I do see "Socket error communicating with splunkd" messages in a couple of logs. I am not sure how to interpret that. Also, there is nothing of relevance in "/opt/splunk/var/log/splunk/first_install.log"
Worked perfectly.  Thank you for posting.
Thank you. The quotes made all the difference, silly mistake. 
Hello, I have a question about a test I have in progress. I have one indexer cluster with two servers. I decided to move the colddb path to a SAN share mounted on both indexers. For one test index, I copied all the colddb buckets into the new space on the SAN, then changed the configuration of my test index, pushed it to both indexers with the master node, and all seems to be OK. My question is: how will Splunk manage the colddb if both indexers are pointing at the same share? What I'm not sure about is whether the indexers will handle the case where indexer1 has already saved a bucket in the new colddb path, or whether I will end up with duplicate buckets. Thanks for your clarifications.
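A minimal indexes.conf sketch of the change described, with a hypothetical SAN mount point /mnt/san; this only illustrates the configuration in question, not a recommendation to share coldPath between clustered indexers:

[test_index]
homePath = $SPLUNK_DB/test_index/db
# both indexers would resolve this to the same SAN directory
coldPath = /mnt/san/splunk/test_index/colddb
thawedPath = $SPLUNK_DB/test_index/thaweddb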
I have to look up this command every few months because I can never remember it... Are you talking about the 'scrub' command? It turns your search results from email=thisemail@gmail.com into email=fjnwspfvj@gmail.com, or possibly email=dspehbpwn@smrls.dpo. It keeps the data in the same format, just jumbles everything up.
https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/SearchReference/Scrub
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Scrub
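A minimal usage sketch; scrub anonymizes the search results only, not the indexed data:

index=_internal | head 100 | scrub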
Hi @jcorcorans, during ingestion Splunk recognizes the epoch time and uses it as the timestamp, so you can use the _time field to get a readable timestamp. It isn't good practice to convert it before indexing; in any case, you can also create an additional field at search time. Ciao. Giuseppe
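For example, a search-time conversion along those lines:

... | eval readable_time = strftime(_time, "%Y-%m-%d %H:%M:%S")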
Thank you yuanliu, it is working.
Putting the queries together is pretty simple, but getting a usable graph from the result is another matter.
| tstats count as Requests sum(attributes.ResponseTime) as TotalResponseTime sum(attributes.latencyTime) as TotalLatencyTime where index=app-index NOT attributes.uriPath IN ("/", "/provider")
| eval TotResTime=TotalResponseTime/Requests, TotLatencyTime=TotalLatencyTime/Requests
| fields TotResTime TotLatencyTime
This will produce two single-value fields, which isn't enough for an area chart. What is it you want to show in the chart?
This seems the wrong way around - it is like saying I have a tool, now what problem can I solve with it? The question should be, I have a problem, what is the best tool to solve my problem?
Which of the 4 suggestions did you try? Did none of them help? It would help to have the event as text rather than as an image, since it's impossible to put an image into regex101.com for testing. Try this untested props.conf stanza:
[mysourcetype]
SEDCMD-noContext = s/Context Information:.*/Context Information:/g
Try using numeric values for your x and y axes:
| eval start_time_bucket = 5 * floor(start_time/5)
or
| bin start_time as start_time_bucket span=5
Linux logs only showing epoch time - how to convert epoch time upon ingestion in props/transforms? Is there a way, or a conversion, to convert the epoch time to human-readable upon log ingestion?
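If the timestamp does need to be declared explicitly, a hedged props.conf sketch; the sourcetype name is hypothetical, and TIME_FORMAT = %s parses the value as epoch seconds:

[linux:epoch]
TIME_PREFIX = ^
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 10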
I tried to make a bubble chart with the following layout:
- x axis: test start time
- y axis: test duration time
- bubble size: count depending on x axis & y axis
And this is my code.
| eval start_time_bucket = case(
    start_time >= 0 AND start_time < 5, "0~5",
    start_time >= 5 AND start_time < 10, "5~10",
    start_time >= 10 AND start_time < 15, "10~15",
    start_time >= 15 AND start_time < 20, "15~20",
    true(), "20~")
| eval duration_bucket = case(
    duration >= 0 AND duration < 0.5, "0~0.5",
    duration >= 0.5 AND duration < 1, "0.5 ~ 1",
    duration >= 1 AND duration < 1.5, "1 ~ 1.5",
    duration >= 1.5 AND duration < 2, "1.5 ~ 2",
    duration >= 2 AND duration < 2.5, "2 ~ 2.5",
    true(), "2.5 ~")
| stats count by start_time_bucket, duration_bucket
| eval bubble_size = count
| table start_time_bucket, duration_bucket, bubble_size
| rename start_time_bucket as "Test Start time" duration_bucket as "duration" bubble_size as "Count"
So when start_time is 12 and duration is 2, this data is counted in the bubble size at start_time_bucket = "10~15" and duration_bucket = "2~2.5". I have a lot of data on each x & y axis, but it only shows the bubble when start_time_bucket = "0~5" and duration_bucket = "0~0.5", as in the attached picture. How could I solve this problem? When I show this data in a table, it displays very well.
Hello, we are facing the same issue. How did you identify the wallclock message? Is it possible that ours is a different one? I tried your solution after correcting the Python to
if 'took wallclock_ms' not in err:
but it hasn't worked for us. Thanks, José
Your suggestion didn't help, unfortunately. Here is an example of a log; I need to cut all the data after "Context Information" (inclusive). An attachment is added.
How can I cut some parts of my message prior to index time? I tried to use both SEDCMD and a transform on raw messages, but I still get the full content each time. Here is my current props configuration:
[ETW_SILK_JSON]
description = silk etw
LINE_BREAKER = ([\r\n]+"event":)
SHOULD_LINEMERGE = false
CHARSET = UTF-8
TRUNCATE = 0
# TRANSFORMS-cleanjson = strip_event_prefix
SEDCMD-strip_event = s/^"event":\{\s*//
And my message sample:
"event":{{"ProviderGuid":"7dd42a49-5329-4832-8dfd-43d979153a88","YaraMatch":[],"ProviderName":"Microsoft-Windows-Kernel-Network","EventName":"KERNEL_NETWORK_TASK_TCPIP/Datareceived.","Opcode":11,"OpcodeName":"Datareceived.","TimeStamp":"2024-07-22T14:29:27.6882177+03:00","ThreadID":10008,"ProcessID":1224,"ProcessName":"svchost","PointerSize":8,"EventDataLength":28,"XmlEventData":{"FormattedMessage":"TCPv4: 43 bytes received from 1,721,149,632:15,629 to -23,680,832:14,326. ","connid":"0","sport":"15,629","_PID":"820","seqnum":"0","MSec":"339.9806","saddr":"1,721,149,632","size":"43","PID":"1224","dport":"14,326","TID":"10008","ProviderName":"Microsoft-Windows-Kernel-Network","PName":"","EventName":"KERNEL_NETWORK_TASK_TCPIP/Datareceived.","daddr":"-23,680,832"}}}
I want to get rid of the "event" prefix but none of the options seems to work.
Hey Splunkers, is there any way to use "Realname" and "Mail" within a ProxySSO setup? We are using ProxySSO for authentication and authorization. I figured out that this configuration in authorization.conf works and the user shows up correctly:
[userToRoleMap_ProxySSO]
myuser = myrole-1;myrole-2::test::mymail@test.com
Unfortunately, I didn't find any way to populate this information from the ProxySSO information like I did for RemoteGroup and RemoteUser.
Kind Regards