All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have three different sourcetypes, Result, Node and Error, under the same index:

Result has id, model
Node has address, id, resultid (which is a key to id in Result)
Error has err_msg, id, nid (which is a key to id in Node)

I want to export a result with a stats count of err_msg by Node and model. I tried joins and a subsearch with the IN operator from the other query, but no luck.

index=index1 sourcetype=Node [ search index=index1 sourcetype=Error | stats count by err_msg ] | stats count by id, err_msg
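
A join-free sketch of one way to stitch the three sourcetypes together with stats (sourcetype and field names are taken from the question; the assumption is that every Node event carries resultid and every Error event carries nid):

index=index1 sourcetype IN (Node, Error)
``` common key: Node.id == Error.nid ```
| eval node_key=if(sourcetype=="Node", id, nid)
| stats count(eval(sourcetype=="Error")) as err_count,
    values(eval(if(sourcetype=="Error", err_msg, null()))) as err_msg,
    values(eval(if(sourcetype=="Node", resultid, null()))) as result_key
    by node_key
``` bring in the Result rows so model can be attached via Result.id == Node.resultid ```
| append [ search index=index1 sourcetype=Result | eval result_key=id | fields result_key model ]
| stats values(model) as model, values(err_msg) as err_msg,
    sum(err_count) as err_count, values(node_key) as node_id
    by result_key

join is limited in how many subsearch rows it accepts and truncates silently, which is why a stats-based merge like this is usually the recommended shape for multi-sourcetype correlation.
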
Hi, Let me start by saying that I have very limited knowledge about Splunk; it's normally not my area of expertise. I was doing some performance investigations and accidentally came across an interesting finding for Splunk. With one of the tools I'm using I could see that splunkd.exe had very high latency towards our Splunk servers, 700 ms-1000 ms and more than 20% failed connections. I can't really verify those numbers, because a normal ping towards the same servers returns around 20 ms, so it's only splunkd.exe that has the high latency. I was wondering if anyone could point me in the right direction, where to look, to get an understanding of this "issue".

outputs.conf

[tcpout]
defaultGroup = primary_heavy_forwarders
maxQueueSize = 7MB
useACK = true
forceTimebasedAutoLB = true
forwardedindex.2.whitelist = (_audit|_introspection|_internal)

[tcpout:primary_heavy_forwarders]
server = NAME1:9997, NAME2:9997, NAME3.com:9997
#clientCert = $SPLUNK_HOME/etc/auth/server.pem
#sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
#sslPassword = ********
#sslVerifyServerCert = true

splunkd.log (part of today's log file, from a client; the Swedish Windows error strings are translated to English in brackets):

02-09-2022 10:48:20.841 +0100 INFO ApplicationLicense - app license disabled by conf setting.
02-09-2022 10:48:26.777 +0100 WARN TcpOutputProc - Cooked connection to ip=IP1:9997 timed out
02-09-2022 10:48:50.836 +0100 INFO ScheduledViewsReaper - Scheduled views reaper run complete. Reaped count=0 scheduled views
02-09-2022 10:48:56.568 +0100 WARN TcpOutputProc - Cooked connection to ip=IP1:9997 timed out
02-09-2022 10:49:09.291 +0100 INFO TcpOutputProc - Closing stream for idx=IP2:9997
02-09-2022 10:49:09.291 +0100 INFO TcpOutputProc - Connected to idx=IP1:9997, pset=0, reuse=0. using ACK.
02-09-2022 10:50:17.238 +0100 ERROR TcpOutputFd - Read error. [An existing connection was forcibly closed by the remote host.]
02-09-2022 10:50:17.238 +0100 INFO TcpOutputProc - Connection to IP2:9997 closed. Read error. [An existing connection was forcibly closed by the remote host.]
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Possible duplication of events with channel=source::C:\Program Files\SplunkUniversalForwarder\var\log\splunk\metrics.log|host::807|splunkd|2728, streamId=0, offset=0 on host=IP2:9997
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Possible duplication of events with channel=source::C:\Program Files\SplunkUniversalForwarder\var\log\splunk\metrics.log|host::807|splunkd|2727, streamId=0, offset=0 on host=IP2:9997
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Possible duplication of events with channel=source::C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log|host::807|splunkd|2721, streamId=0, offset=0 on host=IP2:9997
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Possible duplication of events with channel=source::C:\Program Files\SplunkUniversalForwarder\var\log\splunk\health.log|host::807|splunkd|2713, streamId=0, offset=0 on host=IP2:9997
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Possible duplication of events with channel=source::WinEventLog:Security|host::807|XmlWinEventLog:Security|, streamId=3264402492634740844, offset=200186306 on host=IP2:9997
02-09-2022 10:50:17.238 +0100 WARN TcpOutputFd - Connect to IP1:9997 failed. [A socket operation was attempted to an unreachable network.]
02-09-2022 10:50:17.238 +0100 ERROR TcpOutputFd - Connection to host=IP1:9997 failed
02-09-2022 10:50:17.238 +0100 WARN TcpOutputFd - Connect to IP2:9997 failed. [A socket operation was attempted to an unreachable network.]
02-09-2022 10:50:17.238 +0100 ERROR TcpOutputFd - Connection to host=IP2:9997 failed
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Applying quarantine to ip=IP2 port=9997 _numberOfFailures=2
02-09-2022 10:50:17.238 +0100 WARN TcpOutputFd - Connect to IP3:9997 failed. [A socket operation was attempted to an unreachable network.]
02-09-2022 10:50:17.238 +0100 ERROR TcpOutputFd - Connection to host=IP3:9997 failed
02-09-2022 10:50:17.238 +0100 WARN TcpOutputFd - Connect to IP1:9997 failed. [A socket operation was attempted to an unreachable network.]
02-09-2022 10:50:17.238 +0100 ERROR TcpOutputFd - Connection to host=IP1:9997 failed
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Applying quarantine to ip=IP1 port=9997 _numberOfFailures=2

limits.conf

# [thruput]
# maxKBps = 0

The only thing I have tested myself so far is adding the servers to the hosts file, without any success. I also noticed that outputs.conf uses DNS names while the log file shows IPs, but maybe that does not matter. Any help would be much appreciated. Thanks in advance.
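
Not a fix, but a way to quantify the problem from Splunk's own telemetry: the forwarder's connection errors land in _internal. A sketch that counts timeouts and failures per destination (the rex is an assumption based on the message formats shown above):

index=_internal sourcetype=splunkd component IN (TcpOutputProc, TcpOutputFd)
    ("timed out" OR failed OR "Read error")
| rex "(?:ip|host|idx)=(?<dest>[^:,\s]+):9997"
| timechart span=15m count by dest

If the failures cluster on specific destinations or times of day, that points at the network path or at the indexers' receiving side rather than at the forwarder; blocked indexer queues (index=_internal source=*metrics.log group=queue) commonly produce exactly this timeout/quarantine pattern.
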
I am trying to bring future data into a dashboard, covering events from this week's Friday 17:00 UTC until the same day and hour next week, but I don't get any valid results whatsoever, in two test cases.

I have declared 4 tokens:

<eval token="earliest_default">relative_time(now(), "+1w@w5+17h")</eval>
<eval token="latest_default">relative_time(now(), "+7d@d+17h")</eval>
<eval token="time_from">relative_time(now(), "+1w@w+17h")</eval>
<eval token="time_to">relative_time(now(), "+7d@d+17h")</eval>

TEST 1: a search which evaluates the times for a week's span:

<search>
  <query>index="xxx_index" | head 1
  | eval thisFriday17 = if(strftime(now(),"%w")=="5", relative_time(now(), "+1w@w+17h"), relative_time(now(), "+7d@d+17h"))
  | eval nextFriday17 = relative_time(thisFriday17, "+7d@d+17h")
  | eval filterFrom = case("$xxx_presetTime$"=="This Friday 17:00 UTC - Next Week Friday 17:00 UTC", thisFriday17, "$xxx_presetTime$"=="custom", $time_from$)
  | eval filterTo = case("$xxx_presetTime$"=="This Friday 17:00 UTC - Next Week Friday 17:00 UTC", nextFriday17, "$xxx_presetTime$"=="custom", $time_to$)
  | eval filterFrom_label = strftime(filterFrom,"%d-%m-%Y- %H:%M:%S")
  | eval filterTo_label = strftime(filterTo,"%d-%m-%Y- %H:%M:%S")
  | table filterFrom, filterTo, filterFrom_label, filterTo_label</query>
  <earliest></earliest>
  <latest></latest>
  <done>
    <set token="from_drill">$result.filterFrom$</set>
    <set token="to_drill">$result.filterTo$</set>
    <set token="filterFrom_label">$result.filterFrom_label$</set>
    <set token="filterTo_label">$result.filterTo_label$</set>
  </done>
</search>

The main issue is that no data is displayed even when it should be. Changing the span ruins the results: Splunk brings data from LAST Friday until THIS Friday, not from THIS Friday to the upcoming one (or two weeks out). Working in the Advanced Time Span filter and selecting the above throws "The earliest time is invalid".

TEST 2: a working piece of code I came up with is the following, but the results captured are from LAST Friday until THIS Friday, not from THIS Friday to the Friday two weeks out. Reducing the time span below breaks the code. This broke me too.

| eval thisFriday17 = if(strftime(now(),"%w")=="5", relative_time(now(), "@w5+17h"), relative_time(now(), "+1w@w5+17h"))
| eval next2Friday17 = if(strftime(now(),"%w")=="5", relative_time(now(), "@w5+14d+17h"), relative_time(now(), "+1w@w5+14d+17h"))
| eval filterFrom = case("$xxx_presetTime$"=="This Friday 17:00 UTC - Next 2 Weeks Friday 17:00 UTC", thisFriday17, "$xxx_presetTime$"=="custom", $time_from$)
| eval filterTo = case("$xxx_presetTime$"=="This Friday 17:00 UTC - Next 2 Weeks Friday 17:00 UTC", next2Friday17, "$xxx_presetTime$"=="custom", $time_to$)
| eval filterFrom_label = strftime(filterFrom,"%d-%m-%Y- %H:%M:%S")
| eval filterTo_label = strftime(filterTo,"%d-%m-%Y- %H:%M:%S")
| table filterFrom, filterTo, filterFrom_label, filterTo_label

I must mention that the user is not able to change the Preset Time Span I am forcing:

<input type="dropdown" token="xxx_presetTime" searchWhenChanged="true">
  <label>Preset Time Span</label>
  <choice value="This Friday 17:00 UTC - Next Week Friday 17:00 UTC">This Friday 17:00 UTC - Next Week Friday 17:00 UTC</choice>
</input>

Hope I am being clear in exposing my issue. Thanks in advance for your help!
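
A detail that may explain the LAST-Friday results: relative_time applies its modifiers left to right, and a snap like @w5 always rounds backward to the most recent Friday (while @w snaps to Sunday, which is probably not what the time_from token intends). To land on the upcoming Friday you first jump past it, then snap back. A minimal sketch to verify the values (the +7d jump is the assumption doing the work):

| makeresults
``` +7d moves past the coming Friday, @w5 snaps back to it, +17h sets the hour ```
| eval thisFriday17 = relative_time(now(), "+7d@w5+17h")
| eval nextFriday17 = relative_time(thisFriday17, "+7d")
| eval from_label = strftime(thisFriday17, "%d-%m-%Y %H:%M:%S")
| eval to_label = strftime(nextFriday17, "%d-%m-%Y %H:%M:%S")
| table thisFriday17 nextFriday17 from_label to_label

Note that on a Friday this yields the following Friday, so if "this Friday" should mean today when today is Friday, the if() branch from TEST 2 is still needed. Also, whatever lands in <earliest>/<latest> must be the raw epoch values (thisFriday17/nextFriday17), not the formatted labels; a formatted string there is one way to get "The earliest time is invalid".
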
We have events with the field "ip_client" and a lookup file (F5_IPS_Exclusion.csv) with the field "F5_Exclusion_IPS", as shown below.

| inputlookup F5_IPS_Exclusion.csv

F5_Exclusion_IPS
192.203.194.133
192.203.194.137
202.128.98.209
202.128.98.210

Note: the lookup file contains duplicate values too. I need a search query that returns the events whose "ip_client" field value does not match any "F5_Exclusion_IPS" value in the lookup file.
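
A sketch of the usual pattern: feed the lookup into a NOT [...] subsearch so every F5_Exclusion_IPS value is excluded; the base search is a placeholder, and the rename makes the subsearch emit ip_client=... terms:

index=your_index sourcetype=your_sourcetype
    NOT [ | inputlookup F5_IPS_Exclusion.csv
        | dedup F5_Exclusion_IPS
        | rename F5_Exclusion_IPS as ip_client
        | fields ip_client ]

The dedup is optional (duplicates in the file don't change the result) but keeps the expanded subsearch small; the whole thing expands to NOT (ip_client=192.203.194.133 OR ip_client=192.203.194.137 OR ...), so only events whose ip_client is absent from the lookup survive.
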
Hello, I want to calculate the difference in days as below: future days should be positive and past days negative. I tried eval diff=(now()-_time) and then strftime(diff,"%D"), but all the days come out positive; I want the past days to be negative.

Date        Difference in Days
04-02-2022  -5
05-02-2022  -4
06-02-2022  -3
07-02-2022  -2
08-02-2022  -1
09-02-2022   0   (today's date)
10-02-2022   1
11-02-2022   2
12-02-2022   3
13-02-2022   4
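
now()-_time is positive for past events, so the sign needs flipping, and snapping both timestamps to midnight makes the result a whole number of days. A minimal sketch:

``` negative for past days, 0 for today, positive for future days ```
| eval diff_days = round((relative_time(_time, "@d") - relative_time(now(), "@d")) / 86400)

As an aside, strftime(diff,"%D") cannot work here: strftime formats an epoch timestamp as a date; it is not a duration formatter.
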
Point 1: I need to use only the logs from specific timings to produce the output (07:00 to 20:00, weekdays only, and only for the dates 1st Jan to 17th Jan plus 31st Jan).

Point 2: We receive a log from the host (host=abc) and we have one interesting field named Ip_Address. This field holds multiple IPs, and an event is indexed every 5 minutes, like "Ping success for Ip_Address=10.10.101.10" or "Ping failed for Ip_Address=10.10.101.10".

FYI, if I get events like "1:00pm ping failed" and "1:05pm ping success", we do not count that towards the failed percentage. Only when the failure occurs more than once in a row (continuously, like "1:00pm ping failed" and "1:05pm ping failed") is it considered a failure.

I do not want data for all IP addresses; only certain IP addresses, listed in our CSV file, are required, within the timings mentioned. We need the failed and success percentage for those IPs, with a final output like:

IP_Address  Failed%  Success%
1.1.1.1     0.5      99.5
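
A sketch of one approach, with the assumptions flagged: the IP list lives in a hypothetical monitored_ips.csv with an Ip_Address column, the raw text literally contains "Ping failed", and a failure only counts when the previous 5-minute poll for the same IP also failed:

index=your_index host=abc date_wday!=saturday date_wday!=sunday date_hour>=7 date_hour<20
    [ | inputlookup monitored_ips.csv | fields Ip_Address ]
| eval status=if(searchmatch("Ping failed"), 1, 0)
| sort 0 Ip_Address _time
``` window=2 looks at the current poll plus the previous one, per IP ```
| streamstats window=2 sum(status) as fails_in_window by Ip_Address
| eval real_fail=if(status=1 AND fails_in_window=2, 1, 0)
| stats sum(real_fail) as failed, count as total by Ip_Address
| eval failed_pct=round(failed/total*100, 1), success_pct=100-failed_pct
| rename Ip_Address as IP_Address, failed_pct as "Failed%", success_pct as "Success%"

Run it over 1-17 Jan plus 31 Jan via the time picker (or earliest/latest). One subtlety: a run of N consecutive failures contributes N-1 to the failed count; adjust the real_fail condition if the whole run should count.
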
I have this event:

2022-02-03 12:07:12 [machine-run-00000-hit-000000-step-00000] [[Card Onboarding] CCC Capture - Logging Framework] [Card Onboarding business process v3.0.0_logging (CardOnboardingCPSCapture)] [CC00] CardOnboardingCPSCaptureRobot [ERROR] Error CPS NOT AVAILABLE on CPS screen UNKNOWN

I need to extract the following fields from the event above, please:

2022-02-03 12:07:12 - Date
[Card Onboarding] CCC Capture - Logging Framework - Process
Card Onboarding business process v3.0.0_logging (CardOnboardingCPSCapture) - Step
CC00 - User
ERROR - Log_Level
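
A sketch of a rex for this layout; it leans on the assumption that the Process and Step brackets never contain the exact sequence "] [" internally, which holds for the sample:

| rex "^(?<Date>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \[[^\]]+\] \[(?<Process>.+?)\] \[(?<Step>.+?)\] \[(?<User>[^\]]+)\] \S+ \[(?<Log_Level>[^\]]+)\]"

On the sample event this yields Date="2022-02-03 12:07:12", Process="[Card Onboarding] CCC Capture - Logging Framework", Step="Card Onboarding business process v3.0.0_logging (CardOnboardingCPSCapture)", User="CC00" and Log_Level="ERROR"; the lazy .+? groups stop at the first "] [" boundary, which is what lets the nested "[Card Onboarding]" survive inside Process.
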
Hi, I am trying to explore more ways to check whether business email compromise is happening in our organization, before the end user recognises it. I have a list of domains that we usually communicate with; there are around 490 domains I have listed and added to a CSV file. There is an index, updated in real time, which has logs from Mimecast. I would like to list the domains that are trying to establish email communication with our organization but are not in the CSV file, so if a non-matching domain is emailing us, it shows up in a dashboard. Is this possible?
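
This is possible with a lookup-driven exclusion. A sketch where the index, field and file names are placeholders for your Mimecast data (adjust sender_address and the CSV's domain column to whatever your events and file actually use):

index=your_mimecast_index
``` take the part after the @ as the sending domain ```
| eval sender_domain=lower(mvindex(split(sender_address, "@"), -1))
| search NOT [ | inputlookup known_domains.csv
    | eval sender_domain=lower(domain)
    | fields sender_domain ]
| stats count as emails, earliest(_time) as first_seen by sender_domain
| convert ctime(first_seen)
| sort - emails

Saved as a scheduled search behind a dashboard panel, this gives the "unknown senders" view you describe.
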
Dear All, I agree that this may not be the right forum to post this. There are a lot of authentication failures for some accounts, and the sources are two Linux servers. We checked with the users; they did not enter incorrect credentials this many times. For sure, this is some process or job. However, I would like to understand why these attempts are failing. And if these are counted as failed attempts, why don't they lock out the account (considering we have an account lock-out policy)? Can someone help me understand how these attempts are generated?
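
For the "which process is doing this" part, the rhost/tty/user fields that PAM writes into each failure line usually answer it. A sketch against a hypothetical Linux auth index/sourcetype:

index=os sourcetype=linux_secure "authentication failure"
| rex "rhost=(?<rhost>\S*)"
| rex "user=(?<failed_user>\S+)"
| rex "tty=(?<tty>\S*)"
| stats count, min(_time) as first_seen, max(_time) as last_seen by host, failed_user, rhost, tty
| convert ctime(first_seen) ctime(last_seen)
| sort - count

On the lockout question, one common explanation worth verifying against /etc/pam.d on the two servers: lockout counters (pam_faillock/pam_tally2) are configured per PAM service stack, so failures arriving through a service whose stack doesn't include the counting module (cron jobs, some daemons) never increment the counter.
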
How do I eliminate duplicate rows before the transaction command? Because of them I am getting a wrong calculation. Example scenario: calculating downtime based on events. The query is:

index="winevent" host IN (abc) EventCode=6006 OR EventCode="6005" Type=Information
| eval BootUptime = if(EventCode=6005, strftime(_time, "%Y-%d-%m %H:%M:%S"), null())
| eval stoptime = if(EventCode=6006, strftime(_time, "%Y-%d-%m %H:%M:%S"), null())
| transaction host startswith=6006 endswith=6005 maxevents=2
| eval duration=tostring(duration,"duration")
| eval time_taken = replace(duration,"(\d+)\:(\d+)\:(\d+)","\1h \2min \3sec")
| rename time_taken AS Downtime
| dedup Downtime, BootUptime
| table host, stoptime, BootUptime, Downtime

The result is:

host  stoptime             BootUptime           Downtime
abc   2022-30-01 10:39:25  2022-30-01 10:40:29  00h 01min 04sec
abc   2022-09-01 09:27:53  2022-09-01 09:28:34  00h 00min 41sec
abc   2021-28-11 10:52:52  2022-09-01 09:28:34  41d 22h 35min 42sec

Since I have a duplicate in BootUptime, the downtime calculation in the last row is incorrect. How do I get rid of this? Thanks in advance.
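
If the duplicates are literal re-indexed copies (same host, EventCode and timestamp), dropping them before transaction is a one-line change; a sketch of the head of the search, with everything from transaction onward staying as in the question:

index="winevent" host IN (abc) (EventCode=6005 OR EventCode=6006) Type=Information
``` collapse exact duplicate copies before pairing 6006 (shutdown) with 6005 (startup) ```
| dedup _time host EventCode
| eval BootUptime = if(EventCode=6005, strftime(_time, "%Y-%d-%m %H:%M:%S"), null())
| eval stoptime = if(EventCode=6006, strftime(_time, "%Y-%d-%m %H:%M:%S"), null())
| transaction host startswith=6006 endswith=6005 maxevents=2

If the copies differ by a few seconds rather than sharing an identical _time, bin a copy of _time (e.g. to 1m) and dedup on that instead.
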
The original problem I am trying to fix is that I am not able to tag single events, since they don't have a small enough field to use for the tags (the only unique field was over 1024 chars). The solution was to create, on the sourcetype we care about, a field holding a sha256 value, giving us a unique field. What I have added in the local directory of the TA for the sourcetype:

transforms.conf

[add_event_hash]
INGEST_EVAL = event_hash=sha256(_raw)
FORMAT = event_hash::$1
WRITE_META = true

props.conf

[thor]
TRANSFORMS-event_hash = add_event_hash

fields.conf

[event_hash]
INDEXED = true

The result, after restarting Splunk and re-importing the data, is that the field is successfully created with the value we want, yet the field value is not searchable. A search for event_hash=<hash> generates 0 results; it only generates the correct result when using event_hash=*<hash>*. Any assistance would be much appreciated.
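
A hedged guess at the wildcard-only symptom: FORMAT = ...::$1 is the syntax for REGEX-based index-time transforms, where $1 refers to a capture group; with INGEST_EVAL there is no capture group for $1 to point at, so the indexed term may not be written as a clean event_hash::<hash> pair. That is exactly the case where an exact event_hash=<hash> search (a term lookup) misses while event_hash=*<hash>* (a value scan) still hits. A minimal sketch of the INGEST_EVAL-only variant, same stanza name as the question:

transforms.conf

# INGEST_EVAL writes the indexed field by itself;
# FORMAT / WRITE_META belong to the REGEX style and are dropped
[add_event_hash]
INGEST_EVAL = event_hash=sha256(_raw)

Also worth checking: fields.conf with INDEXED = true has to be visible to the search head (not only the indexing tier), otherwise the exact match fails the same way; and the data must be re-ingested after the change, since index-time settings never rewrite existing events.
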
I have multiple pie charts, each showing data from a different cluster. I would like to define one generic datasource that takes the cluster name as an input. Is there a way to define/set variables within a visualization block and pass them to a datasource?

{
  "visualizations": {
    "viz_pie_chart_cluster1": {
      "type": "viz.pie",
      "dataSources": { "primary": "ds1" },
      "title": "Cluster 1",
      "options": {
        "chart.showPercent": true
      }
      # I want to pass cluster_name=cluster1 from this visualization
    },
    "viz_pie_chart_cluster2": {
      "type": "viz.pie",
      "dataSources": { "primary": "ds1" },
      "title": "Cluster 2",
      "options": {
        "chart.showPercent": true
      }
      # I want to pass cluster_name=cluster2 from this visualization
    }
  },
  "dataSources": {
    "ds1": {
      "type": "ds.search",
      "options": {
        "query": "... cluster_name=$cluster_name$ ..."
      },
      "name": "ds1"
    }
  }
}
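
As far as I know, Dashboard Studio has no per-visualization variables, so a shared datasource cannot be parameterized by the panel that consumes it. The usual workaround is one base search plus a chained datasource per cluster: the base runs once, and each ds.chain post-processes it. A sketch under that assumption (the base query is a placeholder):

"dataSources": {
  "ds_base": {
    "type": "ds.search",
    "options": {
      "query": "index=your_index | stats count by cluster_name, category"
    },
    "name": "ds_base"
  },
  "ds_cluster1": {
    "type": "ds.chain",
    "options": {
      "extend": "ds_base",
      "query": "| search cluster_name=cluster1"
    },
    "name": "ds_cluster1"
  },
  "ds_cluster2": {
    "type": "ds.chain",
    "options": {
      "extend": "ds_base",
      "query": "| search cluster_name=cluster2"
    },
    "name": "ds_cluster2"
  }
}

Each pie chart then points its "primary" at its own chained datasource (viz_pie_chart_cluster1 → ds_cluster1, and so on).
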
Hi, I am trying to use regex with the Field Extractor to extract the value of a particular field from a given piece of text, but I am having a problem with the regex. The text is in the format "text | message: value | more text". Basically I need to extract the value of the field 'message' and put it into a field named raw_message. The value of the message field can be any string. Each field/value pair in the text is separated by a pipe character, as seen below; I want to extract only the value of the 'message' field. All other text can be ignored, including the ":" character that follows the field name. Sample text:

| source: 10.2.2.134 | message: P-235332 | host: clmm0011.syn.local

So the regex needs to extract "P-235332" into a new field named raw_message. Can somebody help me with a regex that would work for this? Thanks.
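
A sketch that should work both in the Field Extractor and with rex; it anchors on the literal "message:" label, stops at the next pipe, and lets the lazy quantifier trim trailing whitespace (the assumption is that a pipe always follows the message value, as in the sample):

\|\s*message:\s*(?<raw_message>[^|]+?)\s*\|

``` same pattern, applied inline ```
| rex "\|\s*message:\s*(?<raw_message>[^|]+?)\s*\|"

On the sample event this yields raw_message="P-235332".
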
Hello, our customer lost access to support, but we need to open a ticket. We have the name of the customer, the invoice, etc. How can we retrieve the account/password, or simply transfer the supported instances into my account? Regards
I am trying to group by 2 fields, policy_id and client_rol, with "| stats values(*) by policy_id client_rol", but then the values of the remaining fields go missing. I have the following table:

policy_id  client_rol  client_id  client_city
001        TO          X0001      LONDON
001        AS          X0001
001        TO          X0001      LONDON
001        AS          X0001

The result I would like to get is:

policy_id  client_rol  client_id  client_city
001        TO          X0001      LONDON
001        AS          X0001

Any clue, guys?
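
The usual fix is the "as *" alias, which keeps the original field names on the values(*) output instead of generating values(client_id)-style columns:

| stats values(*) as * by policy_id, client_rol

If some rows genuinely have a null by-field (rather than an empty string), stats drops them; a | fillnull value="" client_rol before the stats keeps those rows in play.
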
I was trying to get the latest time from index=index1 sourcetype=source1. Here is the search:

| tstats latest(_time) as lastTime where index=index1 sourcetype=source1
| eval lastTime=strftime(lastTime,"%x %X")

I want to use this lastTime as the time picker, so a table displays the data inside source1 starting from it. The purpose is to always get the data from the last query log time in source1. Could anybody tell me how to continue the search to make this work?
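
A subsearch that returns a field literally named earliest is applied as the time bound of the outer search, which lets the computed timestamp drive the window. A sketch (the 300-second lookback is an assumption about how far before the most recent event the window should open):

index=index1 sourcetype=source1
    [ | tstats latest(_time) as earliest where index=index1 sourcetype=source1
      | eval earliest=earliest-300
      | return earliest ]

Note that strftime is only needed for display; the time bound itself must stay a raw epoch value.
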
Is it possible to revert the KV store storage engine migration in a standalone environment with Splunk Enterprise 8.x? For example: if I migrate the KV store storage engine from MMAP to WiredTiger, can I revert this change, i.e. migrate from WiredTiger back to MMAP? If it is possible, what are the steps, and is there any doc for this? I can see the doc/command for migrating from MMAP to WiredTiger:

splunk migrate kvstore-storage-engine --target-engine wiredTiger

I need similar steps for the reverse direction. Please help.
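
I am not aware of a documented target-engine value for going back: the docs only describe the mmapv1 → wiredTiger direction, so it is safest to treat the migration as one-way and rely on a backup taken beforehand. A hedged sketch of that workflow (the archive name is arbitrary; assumes the backup/restore kvstore CLI available in recent 7.x/8.x releases):

# archive the KV store while it is still on the old engine
splunk backup kvstore -archiveName kv_pre_wiredtiger
# the documented one-way migration
splunk migrate kvstore-storage-engine --target-engine wiredTiger

Getting back would then mean restoring that archive (splunk restore kvstore -archiveName kv_pre_wiredtiger) rather than reverse-migrating the engine; for a true engine revert I would confirm the supported path with Splunk Support.
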
Hello, please, I need your help. I have a dedup with a conditional. When the technician enters the reason for a technical service, Splunk saves both the previous value and the new change in this table. I need to delete the repeated rows and keep only the rows that have a reason written by the technician.
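
Without the table every field name below is hypothetical. Assuming each row carries a reason column that is empty on the rows to drop, and some service_id that identifies the service, a sketch:

index=your_index sourcetype=your_services
``` drop the rows where the technician left no reason ```
| where isnotnull(reason) AND trim(reason)!=""
``` then keep one row per service/reason pair ```
| dedup service_id reason
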
Hi, all! I have an existing field, CHECKPOINT_ID, in my table 1, and a CSV file that contains an interpretation of each CHECKPOINT_ID. I want to add a new column, GIVR_CALLFLOW_DEFINED_CHKPNT, to table 1 by using a lookup! Here are table 1 and the CSV file (screenshots attached).
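
Assuming the CSV has been uploaded as a lookup table file (Settings > Lookups > Lookup table files) and its columns are named CHECKPOINT_ID and GIVR_CALLFLOW_DEFINED_CHKPNT, the new column is one lookup command appended to the table-1 search; the file name here is a placeholder:

``` map each CHECKPOINT_ID to its defined checkpoint name ```
| lookup checkpoint_interpretation.csv CHECKPOINT_ID OUTPUT GIVR_CALLFLOW_DEFINED_CHKPNT

If the key column in the CSV has a different name, alias it: | lookup checkpoint_interpretation.csv CSV_ID AS CHECKPOINT_ID OUTPUT GIVR_CALLFLOW_DEFINED_CHKPNT.
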