Using Splunk Cloud, I have created quite a few notable events and correlation searches that work normally, but I can't figure out why several alarms show as Informational under Incident Review when they are intended to be Medium priority. The query does not currently contain anything to change the urgency.
The code below creates a table/scatter plot showing the warnings per hour of day. All warnings between 00:00 and 00:59 are counted/listed under date_hour 0, warnings from 01:00 until 01:59 under date_hour 1, and so on. If the time range spans multiple days I still have the date_hours 0...23, and all the warnings are added up independent of the date.

index="..." sourcetype="..." | strcat opc "_" frame_num "_" elem_id uniqueID | search status_1="DRW 1*" | stats count as warning by date_hour, uniqueID | table uniqueID date_hour warning

For the kind of evaluation we're doing we need counting intervals shorter than the one given by date_hour, e.g. 20 minutes. The question is: how do I update the code to get counting intervals smaller than one hour? Below I managed to reduce the time interval to 20 minutes.

index="..." sourcetype="..." | strcat opc "_" frame_num "_" elem_id uniqueID | search status_1="DRW 1*" | bin _time span=20m as interval | stats count as warning by interval, uniqueID | table uniqueID interval warning

But: the date is still in there, so I get one 20-minute counting interval after the other, day after day. And the interval value is no longer human-readable in the table. Any help is appreciated.
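One possible sketch (reusing the search from the question, with the index/sourcetype placeholders left unchanged): format the 20-minute bucket as a time-of-day string after binning, so buckets from different days fold together and the label is human-readable.

```
index="..." sourcetype="..."
| strcat opc "_" frame_num "_" elem_id uniqueID
| search status_1="DRW 1*"
| bin _time span=20m
| eval interval=strftime(_time, "%H:%M")
| stats count as warning by interval, uniqueID
| table uniqueID interval warning
```

Because stats groups by the strings "00:00", "00:20", "00:40", ..., the date component drops out, analogous to how date_hour behaves.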
Hi, I need to include the EUM_SYNTHETIC_SESSION_DETAILS URL in the alert. What is the variable name for this value? The EUM_SYNTHETIC_SESSION_DETAILS URL gives the latest Synthetic Job session details.
Hello! I have a dashboard scheduled to be exported to PDF on the first day of every month at 1am. The problem is that, although the queries and values presented on the dashboard are correct, most of the time the resulting PDF has values set to 0.

For example, let's say I have 4 single value panels (A, B, C and D) on my dashboard and that those 4 panels have the following values: A -> 10, B -> 45, C -> 94, D -> 2. When I export to PDF, some of those panels appear with value 0 in the PDF. Sometimes one of them, sometimes two of them, sometimes all of them... and sometimes all of them appear with the correct values. It's random.

The same happens with charts that are also on the dashboard: they randomly appear in the exported PDF with "No results found." This is less frequent than the single value panel situation, but I assume the source of the problem is the same.

Any idea what the problem might be? Thank you!
Hello, I want an EVAL statement that manually adds a specified time, say "00:00:00", for events containing only a date component. Example of the file (PSV format):

Poojitha Vasanth|21644|669194|Poojitha Vasanth|02/19/18|PRE-CLINIC VISIT|

Current sourcetype:

[sample:xx:audit:psv]
EVAL-event_dt_tm = date
FIELD_NAMES = "prsnl_name","prsnl_alias","person_alias","person_name","date","event_name"
TIMESTAMP_FIELDS = "date"

And I have modified it to:

EVAL-time = "00:00:00"
EVAL-event_dt_tm = date.time
FIELD_NAMES = "prsnl_name","prsnl_alias","person_alias","person_name","date","event_name"
TIMESTAMP_FIELDS = "date","time"

Even after this change, I am getting the ingestion date and time rather than the actual log time. Could anyone please let me know where I have gone wrong?
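For what it's worth, EVAL- settings in props.conf run at search time, after _time has already been assigned at index time, so they cannot influence timestamp extraction. A sketch of index-time settings that might get the date field parsed as the timestamp (the TIME_FORMAT is an assumption based on the 02/19/18 value in the sample, and INDEXED_EXTRACTIONS = PSV is an assumption about how the file is parsed):

```
[sample:xx:audit:psv]
INDEXED_EXTRACTIONS = PSV
FIELD_NAMES = "prsnl_name","prsnl_alias","person_alias","person_name","date","event_name"
TIMESTAMP_FIELDS = date
TIME_FORMAT = %m/%d/%y
```

With no time component in TIME_FORMAT, the time-of-day portion should default to midnight rather than come from ingestion time, which appears to be the 00:00:00 behaviour you wanted.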
I've tried to configure some reports to be sent via email. I created a report which runs on a schedule and then sends the report via mail. I receive an error like this:

ERROR ScriptRunner [26364 AlertNotifierWorker-3] - stderr from 'C:\Program Files\[..]\sendemail.py "results_link=https://sea:443/app/search/@go?sid=scheduler__myuser__search__RMD5ca1c47b4433f8dbe_at_1675331100_258_5A6150E4-7C97-409F-AC0E-5BC487885B82" "ssname=xxx: myreport" "graceful=True" "trigger_time=1675331129" results_file="C:\Program Files\Splunk\var\run\splunk\dispatch\scheduler__myuser__search__RMD5ca1c47b4433f8dbe_at_1675331100_258_5A6150E4-7C97-409F-AC0E-5BC487885B82\results.csv.gz" "is_stream_malert=False"': ERROR:root:[WinError 10061] No connection could be made because the target machine actively refused it (original German: "Es konnte keine Verbindung hergestellt werden, da der Zielcomputer die Verbindung verweigerte") while sending mail to: mail@mail.org

I thought it might be the SMTP gateway, but sending mails via the | sendemail command works fine. Also some, but very few, reports go through. I also checked whether sendemail.py tried to open a connection to the SMTP server, but it seems the error comes from sendemail.py trying to open the URL in results_link. And that is the point where I do not know what to look for anymore.
I have the following search which returns a table of all hostnames and operating systems:

| inputlookup hosts.csv | search OS="*server*" | table hostname, OS

I would like to add a checkbox to exclude Windows Server 2008 builds. This is what I have so far:

<row>
  <panel>
    <input type="checkbox" token="checkbox" searchWhenChanged="true">
      <label></label>
      <choice value="Windows Server 2008*">Exclude Server 2008</choice>
      <change>
        <condition match="$checkbox$==&quot;Enabled&quot;">
          <set token="setToken">1</set>
        </condition>
        <condition>
          <unset token="setToken"></unset>
        </condition>
      </change>
    </input>
  </panel>
</row>

New panel to show server builds depending on the checkbox:

<query>
| inputlookup hosts.csv | search OS="*server*" AND OS!="$checkbox$" | stats count as total
</query>

This only works when the checkbox is selected and correctly excludes the 2008 builds from the search, but doesn't display anything when the checkbox is unselected. I would like to display all devices when the checkbox is unselected.
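One possible fix (a sketch; the token name search_filter is made up here): instead of setting a flag, have the change handler set the actual filter clause, with a match-all fallback when the box is unticked, and reference that token in the panel query.

```
<input type="checkbox" token="checkbox" searchWhenChanged="true">
  <label></label>
  <choice value="Windows Server 2008*">Exclude Server 2008</choice>
  <change>
    <condition match="$checkbox$==&quot;Windows Server 2008*&quot;">
      <set token="search_filter">OS!="Windows Server 2008*"</set>
    </condition>
    <condition>
      <set token="search_filter">OS="*"</set>
    </condition>
  </change>
</input>
```

The panel query then becomes | inputlookup hosts.csv | search OS="*server*" AND $search_filter$ | stats count as total. When the checkbox is unticked, $search_filter$ expands to the match-all clause OS="*", so all devices are displayed; the query never waits on an unset token.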
I have 2 indexes: abc (FieldA, E, F) and bcz (FieldB, C, D). I want to return D, C and F where the value of field E matches field B. I am getting the required output but am not able to get the FieldF values. This is my query:

index=abc fieldA="<>" | rex field=_raw .......FieldB .... FieldC.. Field D | search [ index=bcz FieldF="<>" | rename FieldE as FieldB | fields FieldB] | stats count as Total by _time, FieldD, FieldC, FieldF | where FieldD="<>"
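FieldF is typically lost here because a subsearch only hands the fields in its fields clause back to the outer search as filter terms, not as columns. A sketch of one alternative (field names taken from the question; the rex extraction is elided here just as it is above, and the "<>" placeholders are unchanged): search both indexes in one pass and roll the fields up by the shared key.

```
(index=abc fieldA="<>") OR (index=bcz FieldF="<>")
| rename FieldE as FieldB
| stats values(FieldC) as FieldC, values(FieldD) as FieldD, values(FieldF) as FieldF by FieldB
| where FieldD="<>"
```

stats values(...) by FieldB collapses the events from both indexes onto the common key, so D, C and F land on the same row without a subsearch.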
Hi, I am running a Search Head Cluster with 7 search heads on Splunk 8.2.9. 2 of the search heads are generating the following error messages at ~5 second intervals for a period of time before stopping:

ERROR DigestProcessor [38271 TcpChannelThread] - Failed signature match
ERROR HTTPAuthManager [38271 TcpChannelThread] - Failed to verify HMAC signature, uri: /services/shcluster/member/consensus/pseudoid/raft_request_vote?output_mode=json

The search head cluster is otherwise running as expected as far as I can tell. The search heads that are producing these errors are the only 2 that have been elected as captain in the last 30 days, from examining the logs. There are no preferred captain or similar configurations set. I have checked the [shclustering] pass4SymmKey values on each search head. They are all configured to the same value, although they use different Splunk Secrets for encryption. I am not sure when the errors first started appearing, so unfortunately I can't link this to a specific upgrade or configuration change. The thread_id values seem to stay around for between 10-30 minutes. Sometimes 2 thread_ids will be active at once, sometimes none are active for a period. When looking at other logs for a particular thread_id around the same time period (at info logging level) I can't see anything that adds any more clues to what is causing the errors.
We have an integration with Event Hubs using Splunk_TA_microsoft-cloudservices, and we see that events are missing. What might be the reason? In case an event reaches the Event Hub with a delay, will the app still pull the data? How far back in time does the app scan for data?
Hi, I am having trouble routing the logs: first.txt should go to one pair of indexers and second.txt to the other. Below is my environment.

inputs.conf:

[monitor:///home/odelakumar06/first.txt]
disabled = false
host = hf
index = firstone
sourcetype = firstone
_TCP_ROUTING = FirstGroupIndexer

[monitor:///home/odelakumar06/second.txt]
disabled = false
host = hf
index = secondone
sourcetype = secondone
_TCP_ROUTING = SecondGroupIndexer

and my outputs.conf is:

[tcpout]
defaultGroup = FirstGroupIndexer,SecondGroupIndexer

[tcpout:FirstGroupIndexer]
disabled = false
server = 34.100.154.111:9997,35.244.6.201:9997

[tcpout:SecondGroupIndexer]
disabled = false
server = 34.100.190.134:9997,34.93.239.18:9997

I have one SH and I added all the above indexes on the SH. When I search index=firstone on the SH, I get nothing. In splunkd.log I see the errors below. Please suggest.

02-02-2023 06:33:10.051 +0000 ERROR TcpInputProc [1983 FwdDataReceiverThread] - Message rejected. Received unexpected message of size=1195725856 bytes from src=162.142.125.9:49748 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
host = indx-1 source = /opt/splunk/var/log/splunk/splunkd.log sourcetype = splunkd
Hi guys, fewer events are displayed when I search with * first and then filter by hostname than when I put the hostname at the beginning of the search. Please suggest why it is misbehaving and what the solution is to get all events. Thanks in advance.
| inputlookup suspicious_win_comm.csv

The lookup table contains only one field, keyword:

keyword
tasklist
ver
ipconfig
net
time
systeminfo
netstat
whoami
chrome

I want to see a result like this:

event.commandline | matching keyword
c:\program files (x86)\application\chrome.exe --signal-argument http............ | chrome

I used this SPL on my local system:

index=crowdstrike [| inputlookup suspicious_win_comm.csv | eval event.commandline = "*".keyword."*" | fields event.commandline | format ]
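A sketch that both filters and labels which keyword matched (the subsearch renames to the special field search so that format emits raw wildcard terms; the keyword list is repeated inline via mvappend purely for illustration, mirroring the lookup contents above):

```
index=crowdstrike
    [| inputlookup suspicious_win_comm.csv
     | eval search="*" . keyword . "*"
     | fields search
     | format ]
| eval keywords=mvappend("tasklist","ver","ipconfig","net","time","systeminfo","netstat","whoami","chrome")
| eval matching_keyword=mvmap(keywords, if(match('event.commandline', keywords), keywords, null()))
| table event.commandline, matching_keyword
```

Note the single quotes around 'event.commandline': field names containing a dot must be quoted that way on the right-hand side of an eval, which is one likely reason the original attempt misbehaved.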
Is it possible to do a rolling upgrade from Splunk version 6 to version 9?
(index="external*" Feedback* "Text") | transaction channel startswith=POST endswith=received maxspan=1m maxevents=2 | xmlkv | dedup ip1 | table ip1 | appendcols [ search index="internal*" "Missing" | eval fields=split(_raw,"|") | eval ip2=mvindex(fields,7) | dedup ip2 | table ip2 ]

I have two separate searches outputting to a table: the first for ip1, which usually has a small number of IP addresses (10-20); the second for ip2, which usually has a much larger number (1000+). E.g. the output might look like:

ip1          | ip2
192.168.1.1  | 192.1681.4
192.168.1.20 | 192.168.4.50
172.20.20.1  | 172.1.2.10
192.168.1.60 | 192.168.4.68
12.140.14.30 | 13.150.34.40
             | 100.149.50.4
             | 192.168.1.60
             | 172.27.27.4
             | 172.40.40.3
             | ... (list continues)

I need to cross-reference the full list of ip2 for any occurrences of IPs from ip1, and ideally have these display in a new column. However this is proving difficult with everything I keep trying: I can only seem to compare the output on a row-by-row basis, which is not what I need and will almost never get a result, e.g. ip1row1 does not match ip2row1, and so on down to ip1rowX vs ip2rowX, and then many empty ip1 rows comparing against the remaining ip2 rows.
Basically I have a set of raw data with different timestamps in CCYYMMDDHHMMSS format. I want stats that show how many occurrences there are per CCYY, then per MM, then per DD. I am able to use strftime to segregate the data into the desired format as year, month and day. My expected output is:

Year | Year Count | Month   | Month Count | Day        | Day Count
2022 | 1000       | 2022-11 | 250         | 2022-11-27 | 20
2023 | 10         | 2022-12 | 100         | 2022-11-12 | 5
     |            |         |             | 2022-11-27 | 35

I used:

| stats count as total by year, month, day

But the actual output is not as expected:

Year | Year Count | Month   | Month Count | Day        | Day Count
2022 | 20         | 2022-11 | 20          | 2022-11-27 | 20
2022 | 5          | 2022-12 | 5           | 2022-11-12 | 5
2022 | 35         | 2022-27 | 35          | 2022-11-27 | 35

Should be simple enough, just not for me. Please help. Thanks!
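A single stats ... by year, month, day can only produce day-level rows, which is why the Year and Month counts come out equal to the day counts. One sketch (assuming year/month/day are derived from _time with strftime, as described in the question): compute day-level counts first, then add year and month totals with eventstats.

```
index="..."
| eval year=strftime(_time, "%Y"), month=strftime(_time, "%Y-%m"), day=strftime(_time, "%Y-%m-%d")
| stats count as DayCount by year, month, day
| eventstats sum(DayCount) as YearCount by year
| eventstats sum(DayCount) as MonthCount by month
| table year, YearCount, month, MonthCount, day, DayCount
```

This yields one row per day with the year and month totals repeated alongside, which carries the same information as the expected table (the blanks in that table are just suppressed repeats).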
I have a JSON source with input via a Splunk Add-on for AWS input. Sometimes there's a timestamp-like field, sometimes not, so I chose to use a sourcetype with Timestamp handling set to "Current time" in the GUI, which I think sets DATETIME_CONFIG = CURRENT in the sourcetype's props.conf entry. I expected this to mean that's where the events would get their timestamp from, but log messages in splunkd.log (according to what I see in index=_internal) still show DateParserVerbose spitting out warnings that "A possible timestamp match ... is outside of the acceptable time window" when events with no recognisable time are indexed. This also means that some events get seemingly random timestamps if some string gets misinterpreted; a handful of events generated yesterday had timestamps in November 2021.

Was I wrong to expect a sourcetype's DATETIME_CONFIG to work that way? If not, what might be happening to stop my intended timestamping? If so, how else should I handle events with no timestamps? Thanks for any advice.
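For reference, a sketch of the stanza I would expect the GUI to have written (the sourcetype name is a placeholder):

```
[my_aws_json_sourcetype]
DATETIME_CONFIG = CURRENT
```

One thing worth checking, under stated assumptions: props.conf timestamp settings only take effect on the first full Splunk instance that parses the data (a heavy forwarder or the indexer). If this stanza lives only on a search head, or the data arrives through a path where a different instance does the parsing, the parser may still be running its default timestamp extraction, which would explain the DateParserVerbose warnings continuing.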
I am working on a saved search, not an index/lookup. I tried this code:

| eval date=strftime(strptime(<fieldname>,"%Y-%m-%d %H:%M:%S"), "%m-%d-%Y %H:%M:%S")

but I am getting blank data. Please help.
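strptime returns null whenever the format string doesn't match the field's actual contents, and strftime(null, ...) then produces a blank, so the first thing to check is the raw field value against the format. A debugging sketch (keeping the <fieldname> placeholder from the question):

```
| eval parsed=strptime(<fieldname>, "%Y-%m-%d %H:%M:%S")
| eval date=if(isnotnull(parsed),
      strftime(parsed, "%m-%d-%Y %H:%M:%S"),
      "parse failed for: " . <fieldname>)
```

If every row shows "parse failed", the field's actual format differs from %Y-%m-%d %H:%M:%S (e.g. it carries subseconds, a T separator, or a different date order).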
Yes, indexer clustering. I set up 3 Windows 10 machines with Splunk Enterprise on them and got them to connect initially to the master indexer, but then got this error. They are on the same DNS and the firewall is turned off on all 3 machines. Thanks.
Hello, I have an array of timeline events:

Timeline: [
  {
    deltaToStart: 788
    startTime: 2023-02-01T21:56:11Z
    type: service1
  }
  {
    deltaToStart: 653
    startTime: 2023-02-01T21:56:11.135Z
    type: service2
  }
]

I would like to table the deltaToStart value only of type service1. Thanks.
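A sketch of one way to do this, assuming the events are JSON and the array sits at a path named Timeline (adjust the path to the real field name): expand the array and filter on type.

```
| spath path=Timeline{} output=entry
| mvexpand entry
| spath input=entry
| where type="service1"
| table deltaToStart
```

spath path=Timeline{} pulls each array element into a multivalue field, mvexpand splits them into one event per element, and the second spath extracts deltaToStart, startTime and type from each element so the where clause can filter on type.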