All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Why would I get "Searches Delayed" under the health check on one search head, but not on the other search heads in a clustered environment? Shouldn't all search heads report the same "delayed searches" issue?
How can we automatically send frozen/archived Splunk logs from the indexers to a Ceph S3 bucket using the indexes.conf file on the indexers?
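For context, Splunk's built-in frozen-archiving options in indexes.conf are coldToFrozenDir and coldToFrozenScript; there is no native S3 frozen target, so a common approach is to hand each frozen bucket to a script that uploads it. A hedged sketch, where the index name and script path are assumptions:

```ini
# indexes.conf on the indexers (sketch; index name and script name are examples)
[my_index]
# Splunk invokes this script with the bucket path as its argument
# just before the bucket would be deleted from cold storage.
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/frozen_to_ceph.py"
```

The hypothetical script itself could upload the bucket with any S3-compatible client pointed at the Ceph RADOS Gateway endpoint (for example, `aws s3 cp --endpoint-url https://ceph.example.com ...`).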
I'm sending data from Azure SQL via Event Hub. I've been using the MS add-on for Splunk, which has been working pretty well, but as it's EOL I'm trying the Splunk Add-on for Microsoft Cloud Services.

The first thing I noticed is how differently the logs are stored. With the MS add-on the JSON is clear: properties.server_principal_name, properties.statement. With the Splunk Add-on for Microsoft Cloud Services there are 2-4 records for each event, and it takes 20-30 seconds to render in a search (index=sql). records{}.properties.server_principal_name and records{}.properties.statement each have 2-4 values in them (SQLUSER, WEBUSER, OPSUSER). The strange thing is there will be 2-4 statements, or other fields like records{}.properties.succeeded with (true, true, true, true). Why 3 users and 4 successes?

I'm trying to query this to get certain traffic, such as records{}.properties.server_principal_name="webuser" | table records{}.properties.statement, and records are returned, but the statements returned are multiple, or simply not statements from WEBUSER. My source for audit logs is correct: mcsc:azure:eventhub.

Is this the way it's supposed to act, and if so, can I get any pointers on getting an spath query working, given that statements from WEBUSER could be the 0th, 1st, 2nd, or 3rd element in a nest on each event?
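One common pattern for this kind of misaligned-multivalue problem is to zip the parallel arrays together before filtering, so that each user stays paired with its own statement. A sketch using the field names from the question; it assumes the two arrays line up index-for-index within each event:

```spl
index=sql sourcetype=mcsc:azure:eventhub
| eval pairs=mvzip('records{}.properties.server_principal_name', 'records{}.properties.statement', "|||")
| mvexpand pairs
| eval user=mvindex(split(pairs, "|||"), 0), statement=mvindex(split(pairs, "|||"), 1)
| where lower(user)="webuser"
| table _time user statement
```

The "|||" separator is arbitrary; it just needs to be a string that never appears inside a statement.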
I noticed that when pushing configuration changes from the deployment server to additional components (universal forwarders, heavy forwarders, indexers, search heads) via the "splunk reload deploy-server" command, there were small 20-25 second windows of missing data ingest. I was under the impression that after these configuration pushes Splunk would catch up on, or re-index, the data from during the change. It appears that this is not the case, after having to explain some missing data gaps. How can I push configuration changes without causing a loss of data ingest? (I'm working with busy access_combined/Apache access logs, so a few hundred events are missed during a 20-second window.) Additionally, I have the full log files; is there a way to re-index only the non-duplicate events? I feel that trying to delete the existing logs via a Splunk search and re-indexing with a "oneshot" method would be very time-consuming across many servers.
Hello! I am having trouble with a query where I want the results to depend on the time results of another query. This first large query shows the total duration (reserving and occupying) that a user is/was in the current room and last room.

index=INDEX host=HOSTNAME sourcetype=FIRST_SOURCETYPE
| rex field=_raw "User:\s(?<user_id>\d+)\s\(LeaveRoom\):\s(?<room_id>\d+)"
| rex field=_raw "User:\s(?<user_id>\d+)\sChooseRoom\s(?<room_id>\d+)"
| eval action=if(like(_raw, "%LeaveRoom%"), "LeaveRoom", (if(like(_raw, "%ChooseRoom%"), "ChooseRoom", null)))
| where isnotnull(action)
| eval late=if(action="LeaveRoom", _time, null)
| streamstats latest(late) as _time earliest(_time) as early_time values(action) as actions by room_id user_id
| where mvcount(actions)=2
| stats latest(room_id) as last_room_id latest(room) as last_room latest(_time) as left_room_time latest(early_time) as chosen_room_time by user_id
| eval last_occupation_duration=tostring((left_room_time - chosen_room_time), "duration")
| eval last_chosen_time=strftime(chosen_room_time, "%Y-%m-%d %H:%M:%S")
| eval last_left_time=strftime(left_room_time, "%Y-%m-%d %H:%M:%S")
| fields - left_room_time, chosen_room_time
| join user_id [search index=INDEX host=HOSTNAME sourcetype=FIRST_SOURCETYPE
    | rex field=_raw "User:\s(?<user_id>\d+)\s\(LeaveRoom\):\s(?<current_room_id>\d+)"
    | rex field=_raw "User:\s(?<user_id>\d+)\sSelected\s(?<current_room_id>\d+)"
    | eval action=if(like(_raw, "%ChooseRoom%"), "ChooseRoom", null)
    | where isnotnull(action)
    | sort user_id _time
    | eventstats latest(current_room_id) as latest_room by user_id
    | streamstats count as count_value by user_id current_room_id reset_on_change=true
    | where current_room_id=latest_room AND count_value=1
    | stats latest(_time) as chosen latest(current_room) as current_room by user_id current_room_id
    | eval current_occupation_duration=tostring((now() - chosen), "duration")
    | eval current_chosen_time=strftime(chosen, "%Y-%m-%d %H:%M:%S")
    | fields - chosen]
| table user_id, current_room, current_room_id, current_chosen_time, current_occupation_duration, last_room, last_room_id, last_chosen_time, last_left_time, last_occupation_duration

Now I want to use the times from the following query as the latest times that the above query checks for. I want variable_time to be the latest value. Once I do this, I'll be able to check whether there was already a user in the room at the time in this second query. Then I think I can use a where clause to compare whether the next location in this second query had a user in it, based on results from the first query.

index=INDEX host=HOSTNAME sourcetype=SECOND_SOURCETYPE
| rex field=_raw "UserId:\s(?<user_id>\d+)\scurrent\slocation:\s(?<current_loc>\w+)\snext\slocation:\s(?<next_loc>\w+)"
| eval variable_time=_time
| table user_id, current_loc, next_loc, variable_time

How would I be able to use that variable_time as the latest time in the first query?
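For what it's worth, one common pattern for this (a sketch, assuming a single cutoff time is enough) is to have a subsearch compute the maximum variable_time and hand it to the outer search as its latest bound via return:

```spl
index=INDEX host=HOSTNAME sourcetype=FIRST_SOURCETYPE
    [ search index=INDEX host=HOSTNAME sourcetype=SECOND_SOURCETYPE
      | stats max(_time) as latest
      | return latest ]
| ...
```

If a different latest time is needed per user or per row, this pattern does not fit; the map command (which reruns the outer search once per result row) is the usual, if slower, alternative.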
I have basic web logs with username and jsessionid. I want to group (assume a single index, with one set of data), so thousands of events. I want to group by jsessionid and username, creating supergroups. Example (username:jsessionid):

tom:1234
frank:1234
bob:1234
bob:5467
sally:5467
sally:9012
amy:9012
harry:4709
tony:4709

I would wind up with 2 groups: a small group with just harry and tony, and a larger group with tom, frank, bob, sally, and amy due to shared jsessionids. I would like my output to contain some kind of group ID or group name. I would have no knowledge of username or jsessionid; I just want to be able to loop through the data and assign users/jsessionids to groups where they exist. My first thought is to sort by jsessionid, but I can't figure out how to loop through the data and create dynamic group names. Thanks for any ideas; I'm not an SPL expert.
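What's described here is a connected-components problem (users linked transitively through shared sessions), and SPL has no built-in transitive closure. A hedged sketch of a first pass (index name assumed), which builds per-session user lists and merges sessions whose user lists are identical:

```spl
index=web
| stats values(username) as users by jsessionid
| eval users_key=mvjoin(mvsort(users), ",")
| stats values(jsessionid) as sessions by users_key
| streamstats count as group_id
| table group_id users_key sessions
```

This does not fully solve the example: chains like bob linking 1234 and 5467 still need an iterative merge, typically done with repeated self-joins or an external/custom search command.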
Hi, I am setting up a Splunk architecture. To start: after installing the tar file, how do we configure that instance to act as a heavy forwarder? Which configuration file makes it an HF? Any suggestions or docs, please?
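In short, a heavy forwarder is a full Splunk Enterprise install whose output is pointed at the indexers, which is done in outputs.conf. A minimal sketch, with hostnames and ports as examples:

```ini
# $SPLUNK_HOME/etc/system/local/outputs.conf (sketch; hosts are examples)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

With a tcpout group configured, the instance forwards rather than retaining events; setting indexAndForward = true under [tcpout] would make it do both, at the cost of local disk.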
I am receiving Splunk alerts by email, but when I click on "View Results" I get the error "The search you requested could not be viewed". However, when the alert owner (a Splunk admin) opens the same link, it opens and the results are visible on their end.
I am looking to calculate per-second transactions, but when doing so through either stats or a timechart I am hitting the limits of these functions. This is in a dashboard with a time selector, so the range can be variable. Is there any way to protect against running over the limits in a search? I was using the per_second function in timechart:

timechart span=5m per_second(count) as TPS

Has anyone found a better way to get the average and min/max at a per-second interval?
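One common workaround (a sketch; the base search is assumed) is to bucket to one second with bin, count per bucket, and then summarize, so the final output stays a single row regardless of the selected time range:

```spl
index=my_index sourcetype=my_transactions
| bin _time span=1s
| stats count as tps by _time
| stats avg(tps) as avg_tps min(tps) as min_tps max(tps) as max_tps
```

The intermediate step still produces one row per second, so very long ranges may need tstats over an accelerated data model, or a span tied to the time picker.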
Trying to get the rex command to extract the last name when the user field has the multiple output formats below. Is there a way to incorporate both options into a rex command?

| rex field=user "(?<user_last_name>[A-Za-z]+),.*"

Smith, Bob
bob.t.smith.abc
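One approach (a sketch, assuming the dotted form is always first.initial.last.suffix) is to run two rex commands in sequence; rex leaves the field untouched when its pattern doesn't match, so each value is handled by whichever pattern fits:

```spl
| rex field=user "^(?<user_last_name>[A-Za-z]+),"
| rex field=user "^[A-Za-z]+\.[A-Za-z]\.(?<user_last_name>[A-Za-z]+)\."
```

"Smith, Bob" matches only the first pattern (capturing Smith); "bob.t.smith.abc" matches only the second (capturing smith).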
Hey all, I'm trying to generate a timechart. If I perform the stats command to validate the number of states, I get the number I'm looking for with this query:

|stats latest(*) AS * by ip, pluginID | dedup macAddress, Datacenter | stats count(state) as Fixed by cve

Now I want to turn that count of state into a timechart, but when I do, I get no data at all:

|stats latest(*) AS * by ip, pluginID | dedup macAddress, Datacenter | timechart count(state) as Fixed by cve useother=false

I'm pretty new to the timechart command; any help would be greatly appreciated! Thanks!
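For what it's worth, timechart needs a _time field on each row, and a stats command drops _time unless it is carried through explicitly, which would explain an empty chart. A hedged sketch of that fix:

```spl
| stats latest(*) as * latest(_time) as _time by ip, pluginID
| dedup macAddress, Datacenter
| timechart count(state) as Fixed by cve useother=false
```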
Hi All, I am looking for an app/add-on to consume or receive email in Splunk Cloud. Here is my use case: I have 4 email providers (Gmail, Yahoo, Hotmail, and Outlook) from which I receive emails very frequently for some use cases. I want to onboard these emails into Splunk, i.e. consume/receive them in Splunk and interpret each one as an event. I came across some of the apps: https://splunkbase.splunk.com/app/3200 https://splunkbase.splunk.com/app/1739 But I could not figure out which is best for my case. Can anyone please help me identify the best one for my use case, or any other way to achieve this? Thanks.
Hi, I am trying to break events which are merging, for SMS and SMPP logs. Only the events with binary codes are breaking; the rest are still merging. Can anyone advise how I can break events here? The props I am using are as below:

KV_MODE = none
BREAK_ONLY_BEFORE = \d{2}:\d{2}:\d{2}:\d{3}\s+(\d+\w+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true

and

KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %H:%M:%S:%3N

SMPP log:

09:55:26:008 (000005A0) --IP--  --: WaitForResponseSMPP: SMPP Debug: ioctlsocket failed, no data
09:55:26:935 (000007B8) --IP--  --: WaitForResponseSMPP: SMPP Debug: ioctlsocket failed, no data
09:55:27:347 (000007D0) --IP--  --: WaitForResponseSMPP: SMPP Debug: received a submit message
09:55:27:347 (000007D0) --IP--  <-: 103 byte packet
09:55:27:347 (000007D0) --IP--  <-: 00 00 00 67 00 00 00 04 00 00 00 00 00 05 5E C1 g ^
09:55:27:347 (000007D0) --IP--  <-: 00 00 00 36 30 30 30 30 30 30 34 00 00 00 35 32 60000004 52
09:55:27:347 (000007D0) --IP--  <-: 69 6D 57 52 36 4A 73 2F 69 31 69 41 47 4F 45 4D imWR6Js/i1iAGOEM
09:55:27:347 (000007D0) --IP--  <-: 71 75 6E 52 6E 61 71 qunRnaq

SMSDebug log:

10:00:11:467 [21] CHECKLF0004###0010\5F7ACFDA.REQ: WAIT
10:00:11:467 [23] CHECKLF0004LF0004###0010\5F7ACFDA.REQ: WAIT
10:00:11:640 [22] VWPRODEGOLF0004###0010\5F7ACFDA.REQ: WAIT
10:00:11:815 [5] ThreadListenForSMPPConnections: Before accept
10:00:11:815 [5] ThreadListenForSMPPConnections: After accept
10:00:11:815 [29] ThreadProcessSMPPConnection: Processing SMPP connection from IP...
10:00:11:908 [28] ThreadProcessSMPPConnection: Releasing SMPP connection from IP
10:00:11:909 [28] WaitForSocketClose: WinSock reported ioctlsocket complete
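Since every line in both samples starts with an HH:MM:SS:mmm timestamp, one sketch of a props.conf approach (sourcetype name is an example) is to disable line merging entirely and break the raw stream wherever a newline is followed by such a timestamp; LINE_BREAKER with SHOULD_LINEMERGE = false is generally more reliable than BREAK_ONLY_BEFORE:

```ini
# props.conf (sketch; stanza name is an example)
[smpp_log]
SHOULD_LINEMERGE = false
# Break before any line that starts with a timestamp like 09:55:26:008
LINE_BREAKER = ([\r\n]+)(?=\d{2}:\d{2}:\d{2}:\d{3}\s)
TIME_FORMAT = %H:%M:%S:%3N
MAX_TIMESTAMP_LOOKAHEAD = 12
NO_BINARY_CHECK = true
```

Note this makes every timestamped line its own event, including the hex-dump continuation lines, which also carry timestamps in the sample.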
Why do some UFs show as missing in the Monitoring Console, and then as active when viewing the MC again? The number of them keeps going up and down! What needs to be checked, please?
I have a summary index that I created from an existing index using the tstats command. When I try to use tstats on the summary index I get no results (stats works). Any idea why?
I have two lookups, B1.csv and B2.csv. B1 has BlockMember and B2 has BlockID, and both have the same Department column. I want to compare them on Department and get the matching values of BlockMember and BlockID. I also have index Z, against which I am searching along with my two lookups.

B1: BlockMember -- Department -- email
B2: BlockID -- Department

The index and B1 have the same email values, so I used a "lookup B1.csv email" command and got BlockMember into my table, but now I am not sure how to get BlockID from B2. My current search:

index=Z pipename=static-website*
| lookup b1 email
| rename member AS BlockMember
| stats count by grid BlockMember

(I got BlockMember from the lookup against b1 using the email field from my index.) My current table is: grid -- status -- BlockMember. My desired table is: grid -- status -- BlockID -- BlockMember (matched on the same Department).
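Lookups can be chained: once the first lookup returns Department alongside BlockMember, a second lookup can match on Department to pull in BlockID. A sketch, with the file and field names taken from the question (exact lookup definitions are assumptions):

```spl
index=Z pipename=static-website*
| lookup b1 email OUTPUT member AS BlockMember Department
| lookup b2 Department OUTPUT BlockID
| stats count by grid BlockMember BlockID
```

This works because B1 carries Department, which then serves as the join key into B2.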
Hello, I'm using metadata on hosts to get their first event time, etc. Are the values accurate even for the oldest records?

| metadata type=hosts

By the way, where is this metadata stored? Thanks
Do the scripts you place in /opt/splunk/bin/scripts remain persistent even after upgrades? Can someone provide documentation that explains this? Note: the scripts we need are not for an app context, so placing them in an app won't work for us. Thanks
Splunk Enterprise version 7.1.0 and higher supports rolling upgrades, so the steps would be:

1. Upgrade the master node / cluster master
2. Perform a rolling upgrade of the search head cluster
3. Perform a rolling upgrade of the indexer cluster
4. Upgrade the heavy forwarders and universal forwarders

But when do we upgrade the deployment server, before or after the upgrade? Any recommendations for the KV store and app KV store? Please share any challenges I might face during the upgrade, or any tips. TA
<?xml version="1.0" encoding="UTF-8"?><message> <software-version>4.1.1810-65</software-version> <source>pia</source> <spec-version>6</spec-version> <message-name>roll-manifest-data</message-name> <roll-summary> <output-roll-id>A00812135</output-roll-id> <input-rolls/> <group-jobs/> <jobs> <job type="diagnostic" distance-run="1583" units="inches" id="spit-page-lead-in-sfhc-hi_0xqo6"/> <job type="single" distance-run="32635" units="inches"> <job-submission-id>712537-15788-KPIC_PG-116-KPIC-00768.s1</job-submission-id> <frames> <printed-ok>29400</printed-ok> <printed-error>0</printed-error> </frames> </job> <job type="single" distance-run="326354" units="inches"> <job-submission-id>712537-15788-KPIC_PG-116-KPIC-0094.s1</job-submission-id> <frames> <printed-ok>29500</printed-ok> <printed-error>0</printed-error> </frames> </job> </jobs> </roll-summary> <jobs> <job> <customer-job-id>spit-page-lead-in-sfhc-hi</customer-job-id> <job-submission-id>spit-page-lead-in-sfhc-hi_0xqo6</job-submission-id> <job-manifest> <start-range> <sequence>0</sequence> <side-a> <universal-frame-id>spit-page-lead-in-sfhc-hi_0xqo6-copy001-frame00</universal-frame-id> <copy-number>1</copy-number> <copy-relative-frame-number>1</copy-relative-frame-number> <copy-relative-signature-number>1</copy-relative-signature-number> <side>side-a</side> <serial-number>2155475</serial-number> <printed-timestamp>2021-03-23T07:13:44.091</printed-timestamp> </side-a> <side-b> <universal-frame-id>spit-page-lead-in-sfhc-hi_0xqo6-copy001-frame</universal-frame-id> <copy-number>1</copy-number> <copy-relative-frame-number>2</copy-relative-frame-number> <copy-relative-signature-number>1</copy-relative-signature-number> <side>side-b</side> <serial-number>215548</serial-number> <printed-timestamp>2021-03-23T07:14:09.460-0700</printed-timestamp> </side-b> </start-range> <end-range> <sequence>1</sequence> <side-a> <universal-frame-id>spit-page-lead-in-sfhc-hi_0xqo6-copy001-frame</universal-frame-id> 
<copy-number>1</copy-number> <copy-relative-frame-number>7</copy-relative-frame-number> <copy-relative-signature-number>4</copy-relative-signature-number> <side>side-a</side> <serial-number>215553</serial-number> <printed-timestamp>2021-03-23T07:13:44.819</printed-timestamp> </side-a> <side-b> <universal-frame-id>spit-page-lead-in-sfhc-hi_0xqo6-copy-frame</universal-frame-id> <copy-number>1</copy-number> <copy-relative-frame-number>8</copy-relative-frame-number> <copy-relative-signature-number>4</copy-relative-signature-number> <side>side-b</side> <serial-number>215554</serial-number> <printed-timestamp>2021-03-23T07:14:09.888</printed-timestamp> </side-b> </end-range> <start-printing> <sequence>2</sequence> <timestamp>2021-03-23T07:13:44.228</timestamp> </start-printing> <start-printing> <sequence>3</sequence> <timestamp>2021-03-23T07:13:58.152</timestamp> </start-printing> <end-printing> <sequence>4</sequence> <timestamp>2021-03-23T07:14:09.890</timestamp> <position units="inches">1549.27834</position> </end-printing> </job-manifest> <content-metadata/> </job> </jobs> </message>

Above I have given the sample data. Here I have to get the data from <jobs>. The expected output columns are:

customer-job-id | serial-number (start-range -> side-a) | serial-number (start-range -> side-b) | serial-number (end-range -> side-a) | serial-number (end-range -> side-b)

I have three scenarios: 1) Inside <jobs>, some <job> elements will have all the data correctly (e.g. the first job). In that case I need the output as below:

customer-job-id: 712537-15789-KPIC_PG-120-KPIC-001 | start-range side-a: 215547 | start-range side-b: 215548 | end-range side-a: 215553 | end-range side-b: 215554
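A sketch of one way to pull those columns with explicit spath paths (the paths are inferred from the sample XML, with <message> as the root; events with multiple <job> elements would return multivalue fields and need further splitting):

```spl
| spath output=customer_job_id path=message.jobs.job.customer-job-id
| spath output=start_a path=message.jobs.job.job-manifest.start-range.side-a.serial-number
| spath output=start_b path=message.jobs.job.job-manifest.start-range.side-b.serial-number
| spath output=end_a path=message.jobs.job.job-manifest.end-range.side-a.serial-number
| spath output=end_b path=message.jobs.job.job-manifest.end-range.side-b.serial-number
| table customer_job_id start_a start_b end_a end_b
```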