All Topics

Hi guys, could you possibly assist me with creating a rex for the log below? I need a rex for "CEOTransactionSessionId":"1D2667DC-7849-1122-3FE3-C4A08EAC9FEB".
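A minimal rex sketch for that extraction, assuming the value is always a 36-character GUID in this JSON-style key/value form:

| rex field=_raw "\"CEOTransactionSessionId\":\"(?<CEOTransactionSessionId>[0-9A-Fa-f\-]{36})\""
| table CEOTransactionSessionId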
I am trying to modify thresholds based on the day and time. I have the chart completed; I just need help with the thresholds. If the day is Saturday or Sunday: for host a1 the threshold should be 1, for host a2 (same timing conditions) it should be 1.5, and for all remaining hosts it should be 0.5. If the day is Monday through Friday and the time is between 12:00am and 12:00pm: for host a1 the threshold should be 3, for host a2 it should be 4, and for all remaining hosts it should be 1, all under the same timing conditions. The search I am trying is shown below, but there are multiple hosts, it is not working even for a single host, and I also need to change thresholds based on the time of that particular day.

| eval Threshold = case (if(strftime(_time,"%a")="Sat" AND host="a1"), Threshold=1, if(strftime(_time,"%a")=Sun AND host="a1"), Threshold=1, if(strftime(_time,"%a")="Sun" AND host="a2"), Threshold=1.5, if(strftime(_time,"%a")="Sun" AND host="a2"), Threshold=1.5)
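A corrected sketch of the case() logic, assuming the Monday-Friday rule applies between midnight and noon; note that case() takes condition/value pairs directly, with no nested if() calls or assignments:

| eval day=strftime(_time,"%a"), hour=tonumber(strftime(_time,"%H"))
| eval Threshold=case(
    (day="Sat" OR day="Sun") AND host="a1", 1,
    (day="Sat" OR day="Sun") AND host="a2", 1.5,
    day="Sat" OR day="Sun", 0.5,
    hour<12 AND host="a1", 3,
    hour<12 AND host="a2", 4,
    hour<12, 1)

Because case() returns the value of the first matching pair, the weekend catch-all can follow the specific host conditions without extra guards.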
I am building a chart that shows processing time. Is there a way to format the Y-axis as "HH:MM:SS"? Please advise.
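As far as I know, Simple XML chart axes are numeric and have no native HH:MM:SS format, so one common workaround (a sketch, assuming the metric is a number of seconds in a hypothetical field processing_time) is to chart the numeric value and render a formatted duration alongside it for tables or tooltips:

| eval duration_hms=tostring(processing_time, "duration")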
Hello everyone, I have a problem while installing an update for Splunk DB Connect. I have version 3.9 and am trying to update to 3.12.2 via the Web interface. After the update is done and Splunk has restarted, no data can be read from the database. When I try to save an existing connection (or create a new one) in the configuration, I get the error "Problem occurred while accessing keystore". I would appreciate any advice. Thanks in advance.
Hi, I registered for the Splunk Phantom Community edition download 4 days ago. However, the approval is still pending and I have not received the link so far. Please let me know how I can get the download link.
Hi, Which port does Splunk use for Universal Forwarder monitoring? Thanks!
I have events like so:

{"action": {"result": true, "type": "login"}, "actor": {"email": "test.email@domain.tld", "id": "0123456789abcdef0123456789abcdef", "ip": "1.2.3.4", "type": "user"}, "id": "01234567-89ab-cdef-0123-456789abcdef", "newValue": "audit", "oldValue": "review", "owner": {"id": "fedcba9876543210fedcba9876543210"}, "when": "2023-04-21T18:52:32Z", "account_name": "test_account"}

The props.conf file is as follows:

[cloudflare_audit]
NO_BINARY_CHECK=true
INDEXED_EXTRACTIONS=JSON
TIMESTAMP_FIELDS=when
disabled=false
pulldown_type=true

When I do this, I wind up with two records per event, split at the timestamp, each record with the time found in "when". Things I've tried so far, based on the above:
- Adding KV_MODE=none -- the event is parsed as JSON, but the time is ignored
- Adding TIME_PREFIX="when": "" and LINE_BREAKER=}$ -- the event is again split on "when"
- Removing INDEXED_EXTRACTIONS and adding AUTO_KV_JSON=true -- the event is parsed as JSON, but the time is ignored

Two questions: How can I fix this so that it pulls in the time field correctly, without any splitting of the JSON object? Why is it so difficult to ingest JSON logs?
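For comparison, a props.conf sketch that often works for single-line JSON events like this sample (the TIME_FORMAT value is an assumption derived from the "when" field shown; verify it against the real data):

[cloudflare_audit]
INDEXED_EXTRACTIONS = json
KV_MODE = none
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIMESTAMP_FIELDS = when
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
TZ = UTC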
Hello everyone, firstly a big thanks to @ITWhisperer for helping me in recent weeks. I have created a Splunk query which displays the data as below.

Operations         average  response90
create_cart        250      380
cart_summary       240      330
cart_productType   210      321
getCart            260      365

index=my_index openshift_cluster="cluster009" sourcetype=openshift_logs openshift_namespace=my_ns openshift_container_name=container
| search ("POST /shopping/carts/v1 HTTP" OR "GET /shopping/carts/v1/*/summary HTTP" OR "GET *shopping*carts*productType* HTTP")
| eval Operations=case(
    searchmatch("POST /shopping/carts/v1 HTTP"),"create_cart",
    searchmatch("GET /shopping/carts/v1/*/summary HTTP"),"cart_summary",
    searchmatch("GET *shopping*carts*productType* HTTP"),"cart_productType")
| stats avg(processDuration) as average perc90(processDuration) as response90 by Operations
| eval average=round(average,2),response90=round(response90,2)

I want to include one more search pattern, shown below:

"message":{"input":"999.111.000.999 - - [06/Apr/2023:04:08:13 +0000] \"GET /shopping/carts/v1/83h3h331-g494-28h4-yyw7-dq123123123d HTTP/1.1\" 200 1855 8080 10 ms"}

Hence I changed the Splunk query to something like the below to display the tabular information in the above format:

index=my_index openshift_cluster="cluster009" sourcetype=openshift_logs openshift_namespace=my_ns openshift_container_name=container
| rex "\"(?<url>GET /shopping/carts/v1/[^/ ?]+\sHTTP)"
| search ("POST /shopping/carts/v1 HTTP" OR "GET /shopping/carts/v1/*/summary HTTP" OR "GET *shopping*carts*productType* HTTP") OR url
| eval Operations=case(
    searchmatch("POST /shopping/carts/v1 HTTP"),"create_cart",
    searchmatch("GET /shopping/carts/v1/*/summary HTTP"),"cart_summary",
    searchmatch("GET *shopping*carts*productType* HTTP"),"cart_productType",
    searchmatch(url),"getCart")
| stats avg(processDuration) as average perc90(processDuration) as response90 by Operations
| eval average=round(average,2),response90=round(response90,2)

I am encountering the error: Error in 'EvalCommand': The arguments to the 'searchmatch' function are invalid.
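The error arises because searchmatch() only accepts a quoted search-string literal, not a field name. A sketch of one workaround, testing whether the rex extraction above produced a url value at all:

| eval Operations=case(
    searchmatch("POST /shopping/carts/v1 HTTP"),"create_cart",
    searchmatch("GET /shopping/carts/v1/*/summary HTTP"),"cart_summary",
    searchmatch("GET *shopping*carts*productType* HTTP"),"cart_productType",
    isnotnull(url),"getCart")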
Hi Team, we are looking for help understanding an issue we are facing. We have multiple indexes in our Splunk Cloud environment, but whenever a load test is running in our QA environment, ingestion into one of the indexes goes very high. At the same time we notice that ingestion for all other indexes (Prod and the rest) suffers latency; ingestion is almost zero for every other index during a load test. We thought this might be because we had only one cloud HF, so we deployed a new HF and moved all the QA logs to it, but that did not solve it: during the load test the QA logs still consume all the resources and delay the other indexes. Please advise.
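One mitigation sometimes applied in this situation (a sketch, not a confirmed fix for this environment) is to cap the QA heavy forwarder's output throughput so a load test cannot saturate the shared ingestion pipeline:

# limits.conf on the QA heavy forwarder
[thruput]
# cap forwarding at roughly 2 MB/s; tune to your environment
maxKBps = 2048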
I have an application that runs as a Docker container in AWS ECS Fargate. In the log configuration for the container I use the splunk log driver, and I use --log-opt env to set a variable, say xyz. This variable now appears in the logs under attrs.xyz, but I don't want to search with that name every time, so I created a field alias under Settings -> Fields -> Field aliases with xyz = attrs.xyz. Now I cannot see or use this alias to filter searches, but an admin user can see the field, even though the correct app (search) was selected and read permission was given to everyone.
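For reference, the same alias expressed directly in props.conf looks like this (a sketch; the sourcetype name is hypothetical), which can help rule out UI permission issues:

# props.conf in the app where the searches run
[your_docker_sourcetype]
# quote the source field name because it contains a dot
FIELDALIAS-xyz = "attrs.xyz" AS xyz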
I need to build timecharts for more than 100 fields from an index and save the results back to Splunk, and I need to update these timecharts regularly (say, weekly). I can't find a good way to do it. By the way, ChatGPT suggested a data model, but I failed to get a more specific answer from it. Please help. Many thanks.
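One common pattern (a sketch, assuming a pre-created summary index named mysummary and that pre-aggregated weekly results are acceptable) is a scheduled search that writes its timechart output to a summary index with collect:

index=my_index earliest=-1w@w latest=@w
| timechart span=1h avg(*) as *
| collect index=mysummary source=weekly_timecharts

Scheduling this weekly keeps the summary up to date, and the saved results can then be searched with index=mysummary.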
Hi there, I'm currently developing a dashboard to support use-case testing. Among other fields, it reads from an inputlookup a field that is a 1:1 copy of the real "_raw" field of a real test event. The field is copied to an input field that allows the user to modify the content. Here's the code:

<panel depends="$show_test_event_both$" id="id_panel_test_event_xst">
  <table>
    <search rejects="$no_process_srch_test_event_xst$">
      <query>
| inputlookup secops.UC_Testing_Events.csv
| search tenant = "$sel_tenant$" AND alert_name = "$sel_alert_name$" AND alert_version = "$sel_alert_version$"
| rename _raw as test_event
| table test_event
      </query>
      <done>
        <set token="no_process_srch_test_event_xst"></set>
        <eval token="sel_test_event">coalesce($result.test_event$, "n/a")</eval>
        <set token="show_test_event_both"></set>
      </done>
      <earliest>0</earliest>
      <latest></latest>
    </search>
    <option name="wrap">true</option>
    <option name="rowNumbers">true</option>
  </table>
</panel>
<panel depends="$show_test_event_both$" id="id_panel_test_event_edt">
  <input type="input" token="edit_test_event" searchWhenChanged="true" id="id_input_test_event">
    <label>Test Event Prototype</label>
    <default>$initVal_test_event|n$</default>
  </input>
</panel>
<panel depends="$show_test_event_both$" id="id_panel_test_event_opt">
  <input type="radio" token="btn_event_action">
    <label>Select desired action</label>
    <change>
      <condition match="$btn_event_action$==&quot;copy&quot;">
        <unset token="btn_event_action"></unset>
        <unset token="form.btn_event_action"></unset>
        <set token="initVal_test_event">$sel_test_event|n$</set>
      </condition>

The field called "_raw" in the lookup is read into "sel_test_event", which is used as the initial value for the dashboard input field called "edit_test_event". So far, so good: in both fields the TAB characters present in the original "_raw" field appear as they are stored in the lookup (screenshot omitted).

The user then starts to replace some discrete values with placeholders, e.g. {ts_yyyy_mm_dd_HH_MM_SS} for the original "_time" field at the beginning of the row. All existing TAB characters are still in the input field when the user is done. When the save button is pressed, the modified "edit_test_event" field should be written back to the lookup. Running that search (which replaces all discrete values with placeholders as specified by the user) REPLACES all TAB characters with SPACES, so the final result is not usable as a test event. Whatever I have tried so far did not work:

| eval _raw = "$edit_test_event$"
or
| eval _raw = $edit_test_event|s$
or
| eval _raw = "$edit_test_event|n$"

Does anyone have an idea why the TAB characters are replaced, and why even "...|n$" (token filtering off) is not working? As you may imagine, this is an absolute show stopper, since the structure of a "_raw" event must not be modified. Any help appreciated. Many thanks in advance, Ekke
We have a medium-sized environment and I can't find answers anywhere. Can we put an EDR solution on a Splunk Deployment Server?
I would like to create a column that tells me the variance for the array:

| makeresults
| eval raw="1 session1 O1 S1 5 6 7 9# 2 session2 O2 S2 99 55 77 999# 3 session3 O1 S1 995 55 77 999# 4 session4 O1 S1 1 2 4 1#"
| makemv raw delim="#"
| mvexpand raw
| rename raw as _raw
| rex "(?<User>\S+)\s+(?<ClientSession>\S+)\s+(?<Organization>\S+)\s+(?<Section>\S+)\s+(?<downloadspeed_file1>\S+)\s+(?<downloadspeed_file2>\S+)\s+(?<downloadspeed_file3>\S+)\s+(?<downloadspeed_file4>\S+)"
| eval downloadSpeedsArray=json_array(downloadspeed_file1, downloadspeed_file2, downloadspeed_file3, downloadspeed_file4)
| table User ClientSession Organization Section downloadspeed_file1, downloadspeed_file2, downloadspeed_file3, downloadspeed_file4 downloadSpeedsArray variance

Can you please help me calculate this column? Also, is the variance normalized across rows?
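A sketch of a per-row population variance over the four speed fields, computed with plain eval arithmetic:

| eval n=4
| eval mean=(downloadspeed_file1 + downloadspeed_file2 + downloadspeed_file3 + downloadspeed_file4) / n
| eval variance=(pow(downloadspeed_file1 - mean, 2) + pow(downloadspeed_file2 - mean, 2) + pow(downloadspeed_file3 - mean, 2) + pow(downloadspeed_file4 - mean, 2)) / n

Divide by n-1 instead of n if a sample variance is wanted; each row's variance is computed independently of the other rows.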
Extract only the first occurrence between two strings in a paragraph of text in Splunk.
index=perf-*** source=*ResponseDataErrorAnalyzer*
| rex field=_raw "scriptnamestart(?<ScriptName>[\w\D]+)scriptnameend"
| table ScriptName

I want to capture only the first occurrence, store it in ScriptName, and display it in the table. Sample data:

scriptnamestartreceiving_S02_sat_Getscriptnameend<someText>scriptnamestartReceiving_S02_sat_Getscriptnameend<someText>
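The [\w\D]+ pattern is greedy, so it runs to the last scriptnameend in the event. A sketch using a lazy quantifier so the capture stops at the first terminator:

| rex field=_raw "scriptnamestart(?<ScriptName>.+?)scriptnameend"
| table ScriptName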
Hi Friends, hope everyone is doing well! My requirement: I want to send alert results from Splunk to Azure Event Hubs. Could you please suggest how to achieve this? We have tried the webhook option, but it outputs only default fields and cannot be customized. Please let me know whether there is an add-on that can send customized fields from Splunk to Azure Event Hubs; I want to add it as an alert action. Thanks in advance. Regards, Jagadeesh
Hi, in Splunk I have a dashboard with 2 separate searches. I need to connect these searches so that the first search has a drilldown that, when clicked by the user, runs the 2nd search. The first search finds the number of "dv_parent" events for the last quarter and outputs a bar chart of the number of "dv_parent" events per quarter. The 2nd search shows the individual events per "dv_parent" for the last quarter. Currently the two searches are not connected, and I need to connect them with a drilldown. Here is the XML for the dashboard:

<form>
  <label>FCR Peer Review Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timeframe">
      <label></label>
      <default>
        <earliest>-7d@d</earliest>
        <latest>@d</latest>
      </default>
    </input>
    <input type="text" token="assign_tok">
      <label>Name Assigned to Ticket</label>
      <default>*</default>
      <initialValue>*</initialValue>
      <prefix>businessemail ="</prefix>
      <suffix>"</suffix>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Number of FCR changes per quarter</title>
      <chart>
        <search>
          <query>index=servicenow sourcetype="snow:sc_task" dv_assignment_group="SECURITY-NETWORK-L3" description="Request for Dell firewall changes." earliest=-3mon@mon latest=@mon | stats latest(*) as * by dv_parent | eval _time = strptime(dv_sys_updated_on, "%Y-%m-%d") | eval Quarter=strftime(_time,"%Y" . "Q" . ceil((tonumber(strftime(_time,"%m"))+12)/4)) | stats count by Quarter</query>
          <earliest>-3m@y</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>FCR Peer Review</title>
        <search>
          <query>index=servicenow sourcetype="snow:sc_task" dv_assignment_group="SECURITY-NETWORK-L3" dv_state="Closed Complete" description="Request for Dell firewall changes." | table _time, description, dv_parent, dv_state, dv_assigned_to | dedup dv_parent | eval assigned_user=round(random() % 74, 0)+1 | lookup id_lookup.csv businessemail as businessemail | lookup temp_id.csv dv_parent OUTPUT dv_assigned_to as already_assigned | eval assigned_user=coalesce(already_assigned, user)</query>
          <earliest>-1y@y</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

Can you please help by altering this dashboard with the necessary drilldown, using the "dv_parent" field as the token for the drilldown? Many thanks,
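A sketch of the drilldown wiring (the token name quarter_tok is hypothetical; note the first chart aggregates by Quarter, so the click value is the quarter label rather than dv_parent): replace the charting.drilldown=none option in the first panel with a token-setting drilldown, then reference the token in the second panel's query.

<!-- in the first panel's <chart>, in place of charting.drilldown=none -->
<drilldown>
  <set token="quarter_tok">$click.value$</set>
</drilldown>

<!-- appended inside the second panel's <query>: recompute the quarter and filter on the token -->
| eval Quarter=strftime(_time,"%Y" . "Q" . ceil((tonumber(strftime(_time,"%m"))+12)/4))
| where Quarter="$quarter_tok$"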
Hi, I currently have a bar chart which shows the number of requests per business quarter (screenshot omitted). Here is the respective query:

index=servicenow sourcetype="snow:sc_task" dv_assignment_group="SECURITY-NETWORK-L3" description="Request for Dell firewall changes."
| stats latest(*) as * by dv_parent
| eval _time = strptime(dv_sys_updated_on, "%Y-%m-%d")
| eval Quarter=strftime(_time,"%Y" . "Q" . ceil((tonumber(strftime(_time,"%m"))+1)/4))
| stats count by Quarter

I need to alter this query to show ONLY the previous quarter, i.e. FY23Q4. A week from today the next quarter starts, so the bar chart should then show ONLY FY24Q1. Can you please help me with this updated query? Many thanks,
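A sketch of one way to keep only the previous quarter: compute the same quarter label for now() snapped back one quarter, then filter on it (this reuses the post's own quarter formula, including whatever fiscal offset it encodes):

| eval t=relative_time(now(), "-1q@q")
| eval prevQ=strftime(t, "%Y" . "Q" . ceil((tonumber(strftime(t, "%m"))+1)/4))
| where Quarter=prevQ

Appended after the existing | stats count by Quarter, this leaves a single bar that rolls forward automatically when the quarter changes.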
If there is no file update for quite a long time and the file is then updated, the forwarder only pushes the new data after the forwarder service is restarted. Is the forwarder inactive because there were no updates for so long? What is the default duration after which a forwarder is considered inactive? Any suggestions, or is this documented somewhere?
I'm running Splunk Enterprise 8.1.2 and its KV store storage engine is 'mmapv1'. I tested migrating to 'wiredTiger', but I'm afraid acceleration does not work afterwards. Below are the steps I took on a test environment.

1. I built a test Splunk environment, identical to my officially operating Splunk system, and imported some of the KV store collections into it (with the same collections.conf and transforms.conf):

[TEST.kvstore]
field.date = number
field.id = number
field.type = string
field.version = string
accelerated_fields.test = {"id":-1, "date":-1}

2. On the test Splunk (with no changes made yet, still mmapv1) everything worked well; I got lookup search times similar to the original system's.

3. Then I changed the storage engine to 'wiredTiger', following https://docs.splunk.com/Documentation/Splunk/8.2.9/Admin/MigrateKVstore?ref=hk

This member:
  backupRestoreStatus : Ready
  ...
  port : 8191
  replicaSet : DB79F8EF-3560-4A6C-B38E-FF06F1D54661
  replicationStatus : KV store captain
  standalone : 1
  status : ready
  storageEngine : wiredTiger

4. Finally I checked the lookup search time on the wiredTiger engine, and it took much longer than I expected:
  - mmapv1: 52 sec
  - wiredTiger: 90 sec

So I checked what was wrong with the test Splunk and found no KV store accelerations: they existed before (mmapv1) but disappeared after the migration to wiredTiger (before/after screenshots omitted). I even tried importing a new KV store collection, but that also failed to create an acceleration.

Does wiredTiger support KV store acceleration? If so, which configuration should I use?
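A sketch of one way to check whether the accelerated_fields definition survived the migration, by reading the collection's configuration over REST (the app namespace "search" here is an assumption; use the app that owns the collection):

| rest /servicesNS/nobody/search/storage/collections/config/TEST.kvstore
| fields title accelerated_fields.*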