All Posts

And what if the system crashed and didn't generate proper events? It's not a reliable monitoring technique.
And how do you decide which label is the one you want? Your examples aren't consistent here in terms of the label order.
My search is as follows:

sourcetype=linux_audits (type=system_shutdown) OR (type=system_reboot) | table ...

I would like a table displaying the following:
1. host
2. time (when system_shutdown happened)
3. time (when system_reboot happened)
4. duration (how long the system was down)

How do I do that?
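One possible approach is to pair each shutdown with the following reboot per host. A minimal sketch using transaction, assuming exactly one reboot follows each shutdown on a given host and that type is an extracted field:

sourcetype=linux_audits (type=system_shutdown OR type=system_reboot)
| transaction host startswith=eval(type="system_shutdown") endswith=eval(type="system_reboot")
| eval shutdown_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval reboot_time=strftime(_time + duration, "%Y-%m-%d %H:%M:%S")
| table host shutdown_time reboot_time duration

Here transaction computes the duration field automatically (seconds between the first and last event in each pair), and _time of the transaction is the time of the earliest event, i.e. the shutdown.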
Hi @Derson, Thank you for the nice and detailed explanation and the test results. I cannot think of anything else to add. I agree with you that it may be a bug; it seems better to open a support case for this issue.
Well... Splunk sometimes does some unintuitive casts between numbers and strings, and that's just how it is, I think, although so far I've only encountered the opposite situation - the need to explicitly call tonumber() because the field was a string even though it looked like a number. Typing in Splunk isn't that strict (except maybe for datamodels), so I think you just have to get used to it. BTW, you can use typeof() to see the type of a field.
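For illustration, a minimal sketch you can paste into the search bar (the field names are made up):

| makeresults
| eval x="42", x_type=typeof(x)
| eval y=tonumber(x), y_type=typeof(y)
| table x x_type y y_type

Here x starts life as a string literal, so typeof(x) should report String until tonumber() converts it, at which point typeof(y) reports Number.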
Hoping this is something simple with lookahead/lookbehind that I'm missing... I'm trying to extract multi-line fields from ANSI 835 files indexed in chunks by line count, so 10K-line events (unfortunately, I have no control over the sourcetype / event breaking for these). My rex is matching the pattern, but after the first match it skips the second and matches the third. Then it skips the fourth and matches the fifth, etc. The capture group starts and ends with the same pattern (CLP*), and there can be all kinds of variation in the number of lines, the type of lines (starting characters), the number of *-delimited fields (with or without values) in each line, and multiple types of special characters. The constants are the tilde ~ line breaks, and that I need everything between each CLP* occurrence. In the example 835 below, I would need three multi-line fields extracted, starting with (1) 77777777*, (2) 77777778*, and (3) 77777779*, but my rex is only getting (1) and (3). Also, I know there are some redundancies in the rex (the m flag, the n+, etc.); they don't appear to be impacting the results... though happy to eat that sandwich if I'm wrong. Any help with this would be much appreciated! Cheers!

| rex max_match=0 "(?msi)CLP\*(?P<clmevent>.*?)\n+\CLP\*"

Example 835:

N4*Carson*NV*89701~
PER*BL*Nevada Medicaid*TE*8776383472*EM*nvmmis.edisupport@dxc.com~
N1*PE*SUMMER*XX*6666666666~
REF*TJ*111111111~
CLP*77777777*4*72232*0**MC*6666666666666~
CAS*OA*147*50016*0~
CAS*CO*26*22216*0~
NM1*QC*1*TOM*SMITH****MR*77777777777~
NM1*74*1*ALAN*PARKER****C*88888888888~
NM1*PR*2*PACIFI*****PI* 9999~
NM1*GB*1*BARRY*CARRY****MI*666666666~
REF*EA*8888888~
DTM*232*20180314~
DTM*233*20180317~
SE*22*0001~
ST*835*0002~
BPR*H*0*C*NON************20180615~
TRN*1*100004765*5555555555~
DTM*405*20180613~
N1*PR*DIVISON OF HEALTH CARE FINANCING AND POLICY~
N3*1100 East William Street Suite 101~
N4*Carson*NV*89701~
PER*BL*Nevada Medicaid*TE*8776383472*EM*nvmmis.edisupport@dxc.com~
N1*PE*VALLEY*XX*6666666666~
REF*TJ*530824679~
LX*1~
CLP*77777778*2*3002*0**MC*6666666666667~
CAS*OA*176*3002*0~
NM1*QC*1*BOB*THOMAS****MR*55555555555~
NM1*74*1*ALAN*JACKSON****C*66666666666~
REF*EA*8888888~
DTM*232*20171001~
DTM*233*20171002~
CLP*77777779*4*41231.04*0**MC*6666666666668~
CAS*OA*147*9365.04*0~
CAS*CO*26*31866*0~
NM1*QC*1*HELD*ALLEN****MR*77777777778~
NM1*74*1*RYAN*LARRY****C*88888888889~
NM1*PR*2*SENIOR*****PI* 8888~
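For what it's worth, the every-other-match skipping described above is characteristic of the pattern consuming the trailing CLP* delimiter: once a match eats the next CLP*, the regex engine resumes scanning after it, so that occurrence can never start a match of its own. A minimal sketch of a fix using a zero-width lookahead so the delimiter is not consumed (with an alternation so the final claim, which has no following CLP*, still matches):

| rex max_match=0 "(?msi)CLP\*(?P<clmevent>.*?)(?=CLP\*|\z)"

The lookahead leaves the next CLP* in place for the following match to start on. Note \z (absolute end of the event) rather than $, since with the m flag $ would match at every line break and cut the capture short.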
I have a field "Labels" with multiple values in the single field. I need to see only the OS value: red hat(linux) or windows 2019. I tried eval in SPL, but as a result I get either the first value or an empty cell. Thank you. Please see the eval statement and sample data below.

| eval Labels=split(Labels, " ")

Sample data before eval:

development red hat(linux) main_ucmdb
or
contingency production red hat(linux) main_ucmdb
or
production windows 2019 wintel server microsoft windows server 2019 standard main_ucmdb
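Since the OS names themselves contain spaces, splitting on " " will break them apart; matching against the whole string may work better. A minimal sketch, assuming the two OS values listed above are the only ones of interest:

| eval OS=case(match(Labels, "red hat\(linux\)"), "red hat(linux)", match(Labels, "windows 2019"), "windows 2019")
| table Labels OS

case() returns null when neither pattern matches, so rows without a recognized OS end up with an empty OS cell.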
As @scelikok said, your description is a bit vague, and without a sample of your data (anonymized if needed - we don't need your internal secrets ;-)) and a description of what you want to get from it, it's pretty much impossible to help you, because we have no idea what we're talking about.
OK. Maybe you misunderstand how Splunk works. You don't "connect Splunk to a Linux server". You install a UF on the server and (and this might be one of the parts you're missing) you make it send events to Splunk. So, did you verify any of those things I asked you about earlier?
Hi @pranay03, Since your otel collector was originally installed without the start_at=beginning parameter, you should remove otel, delete the checkpoint files on your nodes under the "/var/addon/splunk/otel_pos/" folder, and install otel again. This should make otel re-read all files.
This is due to issues with load balancers sitting in front of the hosts. Check the stickiness settings of the load balancers. Accessing a search head member directly should solve it.
We are using 'Splunk App for Lookup File Editing' version 4.0.1. There are two issues that bother me. First is the continuous popup about 'Save Backup'. I need a way to turn this off, since most of the time my edits are to ad hoc lookups and I don't want a backup. Second is the "can't save" error message I get in Firefox. I have to quit Firefox and restart, and then I can save. After a number of ad hoc lookups I get the error message again. Any workarounds beyond restarting the browser?
Hi @Anurag101, It is not possible to help without knowing your data. Also, the solution depends on whether the data is extracted or not. If you can show some sample anonymized events, we can help with a sample query.
Hey @scelikok, Thanks for jumping in to look at this. I agree that part of the problem is the typing here, because it only returns the wrong lookup uID values for secondUID values that could reasonably be interpreted as numbers. And maybe this comes down to poor error handling. I wouldn't be satisfied just writing this off as a typing issue, though, for multiple reasons:

First (opaque), this would be completely opaque to the end user. Maybe you have enough experience to tell me, but if it is a typing issue, then the typeof command does not return the actual types of the variables; I have to believe it returns what Splunk expects them to be. For instance, in the search below, the final | where like(...) does not remove any results.

| inputlookup kvstore_560k_lines_long max=15000
| stats count by secondUID
| where count=1 AND match(secondUID, "^\d+$")
| head 4000
| eval initialType=typeof(secondUID)
| eval secondUID=tostring(secondUID) ```this line causes the search to return different results```
| eval subsiquentType=typeof(secondUID)
| where like(initialType, subsiquentType)

Second (inconsistent), casting the secondUID with tonumber() causes more wrong results to be returned. So whatever is happening is not happening consistently for secondUIDs that are integers. I initially found this by rexing a dashboard token list of secondUIDs, so it isn't a matter of how the variables are initially loaded from the lookup table. This is also shown by needing the eval command - distilled logic from an if statement - to cause the error. And as far as the inconsistency goes, there is no pattern to the numbers that are treated as numbers vs. strings. It's not like all numbers starting with 0s have this happen, or whitespace around some.

Third (unexpected lookup function), if you looked at any of the events, they would look correct. The values for secondUID would be correct, and they have results returned for uID. Unless this is just another quirk of Splunk where wrong lookups are possible without warning, I would expect | lookup to output a null() value when it is unable to use the key it is given.

Fourth (noticing the error), because Splunk doesn't give a warning and outputs a result, anybody using dedup or stats without a predetermined expected output would likely never know this exists. The only way I noticed it was because I knew there should be a 1-1 relation between my keys and values, and there wasn't at the end of the search.

Fifth (event count dependent), since a higher percentage of errors happens as the initial event count goes up, with no pattern in the secondUID values being passed in other than being "\d+", this makes me worried there is a memory allocation/referencing bug behind the scenes - something which would probably also be fixed by tostring if that were the case. The counterpoint is that it reliably returns the same wrong lookup values.

Maybe it really is just poor error handling in | lookup combined with typeof() being a lie. But I don't know how to come to a definitive conclusion without having access to the Splunk code (C/C++?) behind the scenes, or analyzing Splunk while it is running to rule out an allocation/referencing issue in the absence of the | lookup code... Maybe you would have other ideas on how to narrow it down. Or maybe I'm just crazy
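For anyone else hitting this, the workaround implied above is to force the key to a string before the lookup runs. A minimal sketch using the lookup and field names from this thread (assuming kvstore_560k_lines_long is also defined as a lookup and uID is the output field you want):

| inputlookup kvstore_560k_lines_long max=15000
| eval secondUID=tostring(secondUID)
| lookup kvstore_560k_lines_long secondUID OUTPUT uID

The tostring() is purely defensive: it pins the key's type so | lookup cannot reinterpret numeric-looking values as numbers.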
Hi @AL3Z, You should go to default.xml via the "Settings > User Interface > Navigation" menu while in your app. Use the collection type for dropdown menus, like below. Please update dropdown_menu_name1 and dashboard_name1 as needed. You can add more menus as you wish.

<nav search_view="search" color="#1D1D1B">
  <view name="search" default='true' />
  <view name="reports" />
  <view name="alerts" />
  <view name="dashboards" />
  <collection label="dropdown_menu_name1">
    <view name="dashboard_name1" />
  </collection>
  <collection label="dropdown_menu_name2">
    <view name="dashboard_name2" />
  </collection>
</nav>
Hi @scelikok, Below is my configuration of the filelog receiver inside the helm chart:

{{- if and (eq (include "splunk-otel-collector.logsEnabled" .) "true") (eq .Values.logsEngine "otel") }}
{{- if .Values.logsCollection.containers.enabled }}
filelog:
  {{- if .Values.isWindows }}
  include: ["C:\\var\\log\\pods\\*\\*\\*.log"]
  {{- else }}
  include: ["/var/log/pods/*/*/*.log"]
  {{- end }}
  # Exclude logs. The file format is
  # /var/log/pods/<namespace_name>_<pod_name>_<pod_uid>/<container_name>/<restart_count>.log
  exclude:
    {{- if .Values.logsCollection.containers.excludeAgentLogs }}
    {{- if .Values.isWindows }}
    - "C:\\var\\log\\pods\\{{ .Release.Namespace }}_{{ include "splunk-otel-collector.fullname" . }}*_*\\otel-collector\\*.log"
    {{- else }}
    - /var/log/pods/{{ .Release.Namespace }}_{{ include "splunk-otel-collector.fullname" . }}*_*/otel-collector/*.log
    {{- end }}
    {{- end }}
    {{- range $_, $excludePath := .Values.logsCollection.containers.excludePaths }}
    - {{ $excludePath }}
    {{- end }}
  start_at: beginning
  include_file_path: true
  include_file_name: false
  poll_interval: 200ms
  # Disable force flush until this issue is fixed:
  # https://github.com/open-telemetry/opentelemetry-log-collection/issues/292
  retry_on_failure:
    enabled: true
{{- end }}

Also, when I checked the ConfigMap, start_at is mapped to beginning. Still, it is not showing the older logs in Splunk.
Hi, Could anyone please help me with adding a navigation menu to the dashboard, like in the pic shown below, e.g. Event Search. Thanks in advance
Hi Everyone, I want to create a new use case to detect suspicious activity on insecure ports, from remote to local and from local to remote. I don't understand how to write the query with the source IP/destination IP as remote. Is there any way to define the "context", like remote and local? I want to define that for the L2R rule the destination IP should be remote, and for R2L the source IP should be remote. I tried the reverse condition, but it didn't work properly. Example: for L2R I excluded all the local IP network segments (Source IP != 10.0.0.0/8), and for R2L vice versa (Destination IP != 10.0.0.0/8). Can anyone help me with this please?
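One common approach is to derive the context explicitly with cidrmatch() rather than by exclusion. A minimal sketch, assuming 10.0.0.0/8 is your only local range and the fields are named src_ip and dest_ip (adjust to your actual field names):

| eval src_context=if(cidrmatch("10.0.0.0/8", src_ip), "local", "remote")
| eval dest_context=if(cidrmatch("10.0.0.0/8", dest_ip), "local", "remote")
| eval direction=case(src_context="local" AND dest_context="remote", "L2R", src_context="remote" AND dest_context="local", "R2L")
| where isnotnull(direction)

With direction as a field, one rule can cover both L2R and R2L, and you can split results by direction in stats.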
Hi @gcusello, Option 1 is the smart solution without complicating it - it's working perfectly fine! Thanks for the help.
https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Inputsconf

# For Windows systems only.
# Does not use file handles
[MonitorNoHandle://<path>]
* This input intercepts file writes to the specific file.
* <path> must be a fully qualified path name to a specific file.
  Wildcards and directories are not accepted.
* This input type does not function on *nix machines.
* You can specify more than one stanza of this type.

disabled = <boolean>
* Whether or not the input is enabled.
* Default: 0 (enabled)

index = <string>
* Specifies the index where this input sends the data.
* This setting is optional.
* Default: the default index

I have no experience using this myself and only learned about it in the last week. Since you said the UF was on a Windows client, this could work for you. This would not work with a UF in *nix environments.
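For illustration, a minimal inputs.conf sketch on the UF following the spec above; the file path and index name here are hypothetical placeholders:

[MonitorNoHandle://C:\Logs\app.log]
disabled = 0
index = main

Note the path must point at one specific file; per the spec, wildcards and directories are not accepted for this input type.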