All Posts



I have to look up this command every few months because I can never remember it... Are you talking about the 'scrub' command? It turns your search results from email=thisemail@gmail.com into email=fjnwspfvj@gmail.com, or possibly email=dspehbpwn@smrls.dpo. It keeps the data in the same format and just jumbles the values.
https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/SearchReference/Scrub
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Scrub
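A minimal usage sketch of the command being described (the index and event count here are placeholders, not from the original post):

```
index=_internal sourcetype=splunkd
| head 10
| scrub
```

The command replaces identifying values (usernames, hostnames, IP addresses, etc.) with fictitious strings of the same shape, which is why the format survives while the content is jumbled.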
Hi @jcorcorans,
during ingestion, Splunk recognizes the epoch time and uses it as the timestamp, so you can use the _time field to get a readable timestamp. It isn't good practice to convert it before indexing; in any case, you can also create an additional field at search time.
Ciao.
Giuseppe
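For completeness, if you want Splunk to parse a leading epoch value explicitly rather than rely on automatic detection, a props.conf sketch along these lines is common (the sourcetype name is a placeholder; adjust TIME_PREFIX to wherever the epoch value sits in your events):

```
[my_linux_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 11
```

Here %s is the strptime-style format for Unix epoch seconds, so _time is populated from the raw value without rewriting the event itself.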
Thank you @yuanliu, it is working.
Putting the queries together is pretty simple, but getting a usable graph from the result is another matter.

| tstats count as Requests
    sum(attributes.ResponseTime) as TotalResponseTime
    sum(attributes.latencyTime) as TotalatcyTime
    where index=app-index NOT attributes.uriPath("/", null, "/provider")
| eval TotResTime=TotalResponseTime/Requests, TotlatencyTime=TotalatcyTime/Requests
| fields TotResTime TotlatencyTime

This will produce two single-value fields, which isn't enough for an area chart. What is it you want to show in the chart?
This seems the wrong way around - it is like saying I have a tool, now what problem can I solve with it? The question should be, I have a problem, what is the best tool to solve my problem?
Which of the 4 suggestions did you try? Did none of them help? It would help to have the event in text rather than as an image, since it's impossible to put an image into regex101.com for testing. Try this untested props.conf stanza:

[mysourcetype]
SEDCMD-noContext = s/Context Information:.*/Context Information:/g
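Since SEDCMD uses sed-style substitution, you can sanity-check the pattern outside Splunk before deploying it. A rough Python equivalent (the sample event text here is made up):

```python
import re

# SEDCMD-noContext = s/Context Information:.*/Context Information:/g
# Rough Python equivalent of the sed substitution above; like sed,
# '.' does not cross newlines, so each line is trimmed separately.
pattern = re.compile(r"Context Information:.*")

event = "2024-07-01 12:00:00 ERROR request failed Context Information: long diagnostic blob"
cleaned = pattern.sub("Context Information:", event)
print(cleaned)
# 2024-07-01 12:00:00 ERROR request failed Context Information:
```

If this trims your real event text the way you expect, the same pattern should behave identically in the SEDCMD.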
Try using numeric values for your x and y axes:

| eval start_time_bucket = 5 * floor(start_time/5)

or

| bin start_time as start_time_bucket span=5
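The arithmetic behind the first eval, sketched in Python purely for illustration:

```python
import math

def bucket(value, span=5):
    # Same idea as: | eval start_time_bucket = 5 * floor(start_time/5)
    # Every value maps to the lower edge of its span-wide bucket.
    return span * math.floor(value / span)

print(bucket(12))    # 10 -> lands in the 10~15 bucket
print(bucket(2.7))   # 0  -> lands in the 0~5 bucket
print(bucket(17.9))  # 15
```

Because the bucket edges are numeric rather than strings like "10~15", the chart can sort and plot them in the right order automatically.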
Linux logs only showing epoch time - how do I convert epoch time upon ingestion in props/transforms? Is there a way, or a conversion, to convert the epoch time to human-readable on log ingestion?
I tried to make a bubble chart with the following layout:
- x axis: test start time
- y axis: test duration time
- bubble size: count depending on the x-axis and y-axis buckets

And this is my code:

| eval start_time_bucket = case(
    start_time >= 0 AND start_time < 5, "0~5",
    start_time >= 5 AND start_time < 10, "5~10",
    start_time >= 10 AND start_time < 15, "10~15",
    start_time >= 15 AND start_time < 20, "15~20",
    true(), "20~")
| eval duration_bucket = case(
    duration >= 0 AND duration < 0.5, "0~0.5",
    duration >= 0.5 AND duration < 1, "0.5 ~ 1",
    duration >= 1 AND duration < 1.5, "1 ~ 1.5",
    duration >= 1.5 AND duration < 2, "1.5 ~ 2",
    duration >= 2 AND duration < 2.5, "2 ~ 2.5",
    true(), "2.5 ~")
| stats count by start_time_bucket, duration_bucket
| eval bubble_size = count
| table start_time_bucket, duration_bucket, bubble_size
| rename start_time_bucket as "Test Start time" duration_bucket as "duration" bubble_size as "Count"

So when start_time is 12 and duration is 2, the data is counted in the bubble at start_time_bucket = "10~15" and duration_bucket = "2~2.5". I have a lot of data on each x and y axis, but it only shows the bubble when start_time_bucket = "0~5" and duration_bucket = "0~0.5", as in the picture below. How can I solve this problem? When I show this data in a table, it displays correctly.
Hello,
We are facing the same issue. How did you identify the wallclock message? Is it possible that for us it is a different one? I tried your solution after correcting the Python to

if 'took wallclock_ms' not in err:

but it hasn't worked for us.
Thanks,
José
Your suggestion didn't help, unfortunately. This is an example of a log; I need to cut all the data after "Context Information" (inclusive). An attachment is added.
How can I cut some parts of my message prior to index time? I tried to use both SEDCMD and a transform on raw messages, but I still get the full content each time. Here is my current props configuration:

[ETW_SILK_JSON]
description = silk etw
LINE_BREAKER = ([\r\n]+"event":)
SHOULD_LINEMERGE = false
CHARSET = UTF-8
TRUNCATE = 0
# TRANSFORMS-cleanjson = strip_event_prefix
SEDCMD-strip_event = s/^"event":\{\s*//

And my message sample:

"event":{{"ProviderGuid":"7dd42a49-5329-4832-8dfd-43d979153a88","YaraMatch":[],"ProviderName":"Microsoft-Windows-Kernel-Network","EventName":"KERNEL_NETWORK_TASK_TCPIP/Datareceived.","Opcode":11,"OpcodeName":"Datareceived.","TimeStamp":"2024-07-22T14:29:27.6882177+03:00","ThreadID":10008,"ProcessID":1224,"ProcessName":"svchost","PointerSize":8,"EventDataLength":28,"XmlEventData":{"FormattedMessage":"TCPv4: 43 bytes received from 1,721,149,632:15,629 to -23,680,832:14,326. ","connid":"0","sport":"15,629","_PID":"820","seqnum":"0","MSec":"339.9806","saddr":"1,721,149,632","size":"43","PID":"1224","dport":"14,326","TID":"10008","ProviderName":"Microsoft-Windows-Kernel-Network","PName":"","EventName":"KERNEL_NETWORK_TASK_TCPIP/Datareceived.","daddr":"-23,680,832"}}}

I want to get rid of the "event" prefix but none of the options seems to work.
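One way to check whether the SEDCMD pattern itself matches the sample is a rough Python equivalent like the sketch below. Note also that SEDCMD and index-time TRANSFORMS only run where parsing happens, so if the data reaches this instance already parsed (e.g. forwarded from a heavy forwarder), the stanza will never fire, regardless of the regex.

```python
import re

# Rough Python equivalent of: SEDCMD-strip_event = s/^"event":\{\s*//
# (shortened sample, keeping the leading "event":{ prefix and doubled brace)
sample = '"event":{{"ProviderGuid":"7dd42a49-5329-4832-8dfd-43d979153a88"}}'
stripped = re.sub(r'^"event":\{\s*', '', sample)
print(stripped)
# {"ProviderGuid":"7dd42a49-5329-4832-8dfd-43d979153a88"}}
```

If the pattern works here but not in Splunk, the problem is likely where the props are applied rather than the regex itself.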
Hey Splunkers, is there any way to use "Realname" and "Mail" within ProxySSO setup? We are using ProxySSO for authentication and authorization. I figured out that this configuration on authorizati... See more...
Hey Splunkers,
is there any way to use "Realname" and "Mail" within a ProxySSO setup? We are using ProxySSO for authentication and authorization. I figured out that this configuration in authorization.conf works and the user shows up correctly:

[userToRoleMap_ProxySSO]
myuser = myrole-1;myrole-2::test::mymail@test.com

Unfortunately I didn't find any way to populate this information from the ProxySSO information like I did for RemoteGroup and RemoteUser.
Kind Regards
Anyone have an idea how to do this? For testing purposes of course.
5 columns and 79 rows
I'd like to know what use cases are applied on Splunk Enterprise.
How large is your csv?
Hello. Thank you for all your help and support. In a registered lookup table file (CSV), if I want to search for and match the value of a specific field against two columns, how should I set the input fields in the automatic lookup setup screen? For example, I have the following columns in my table file:

PC_Name,MacAddress1,MacAddress2

The MAC address in the Splunk index log resides in either MacAddress1 or MacAddress2 in the table file. Therefore, I want to search both columns and return the PC_Name of the matching record. As a test, I tried setting the following two input fields to be searched automatically from the lookup settings screen of the GUI, but PC_Name did not appear in the search result fields (see the attached image; if only one of the following input fields is set, PC_Name is output):

MACAddr1 = Mac address
MACAddr2 = Mac address

So, as a workaround, I split the lookup settings into two, setting MACAddr1 = MacAddress in one and MACAddr2 = MacAddress in the other as the input fields, and the search results are displayed. However, this is not smart. Note that the lookup is configured from the Splunk Web UI. What is the best way to configure this?
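For reference, the two-column workaround expressed directly in SPL looks roughly like this (the lookup name is a placeholder and the field names are assumed from the description above; OUTPUTNEW keeps the second lookup from overwriting a value already found by the first):

```
... | lookup my_pc_lookup MacAddress1 AS MacAddress OUTPUTNEW PC_Name
    | lookup my_pc_lookup MacAddress2 AS MacAddress OUTPUTNEW PC_Name
```

Automatic lookups with multiple input fields AND all conditions together, which is why the single combined definition returned nothing when only one column matched.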
I have a dashboard which gives the below error at the user's end, but when I open the dashboard I don't see any error at my end and it runs perfectly fine with the proper result:

Error in 'lookup' command: Could not construct lookup 'EventCodes, EventCode, LogName, OUTPUTNEW, desc'. See search.log for more details.
Eventtype 'msad-rep-errors' does not exist or is disabled.

Please help me fix this issue.
Hi Manall,
I want to use NAS as read storage, i.e. as cold, not hot. BTW, it has worked fine for me so far.