All Topics

I am working on the saved search, not an index/lookup. I tried this code - | eval date=strftime(strptime(<fieldname>,"%Y-%m-%d %H:%M:%S"), "%m-%d-%Y %H:%M:%S") but I am getting blank data back. Please help.
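A likely cause, offered as a guess: strptime() returns null whenever its format string does not match the field's actual contents, and strftime() of null comes back blank. A minimal sketch to verify the format against a known-good literal (the sample value is hypothetical):

| makeresults
| eval raw="2023-02-01 21:56:11"
| eval date=strftime(strptime(raw, "%Y-%m-%d %H:%M:%S"), "%m-%d-%Y %H:%M:%S")

If this works but the real field does not, compare the field's raw value character by character against the format string.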
Yes, indexer clustering. I set up 3 Windows 10 machines with Splunk Enterprise on them and got them to initially connect to the master indexer, but then got this error. They are on the same DNS, and the firewall is turned off on all 3 machines. Thanks.
Hello, I have an array of timeline events:

Timeline: [
    {
      deltaToStart: 788
      startTime: 2023-02-01T21:56:11Z
      type: service1
    }
    {
      deltaToStart: 653
      startTime: 2023-02-01T21:56:11.135Z
      type: service2
    }
  ]

I would like to table the deltaToStart value only for events of type service1. Thanks.
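A sketch, assuming each event is JSON and Timeline is a top-level array in the raw event:

| spath path=Timeline{} output=events
| mvexpand events
| spath input=events
| where type="service1"
| table deltaToStart

spath pulls the array out as a multivalue field, mvexpand gives one row per array element, and the second spath extracts deltaToStart and type from each element.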
I have an index that is throwing a warning, and the Root Cause says: The newly created warm bucket size is too large. The bucket size=32630820864 exceeds the yellow_size_threshold=20971520000 from the latest_detected_index. This index was created just like all the other indexes, and it is the only one throwing the warning. At least 6 months of data have been sent to this index, yet it is saying there is only 14 days of data. What could be the issue with this index?
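A quick diagnostic sketch to confirm what the index is actually configured with ("your_index" is a placeholder):

| rest /services/data/indexes splunk_server=local
| search title="your_index"
| table title maxDataSize frozenTimePeriodInSecs currentDBSizeMB

A ~32 GB bucket is larger than even maxDataSize = auto_high_volume (about 10 GB on 64-bit systems), so that setting looks worth checking, and the "only 14 days of data" symptom is worth comparing against frozenTimePeriodInSecs. This is a place to start looking, not a confirmed root cause.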
I have a simple form that has a global search to set up the initial values of a time input. With that global search, I also set a token for a label on my form. I'd like to update that label when a new value is chosen from the time input, but I cannot get it to work. Here is a full simple example to show what I mean. If I change the time picker, I'd expect the label to be updated to reflect that change.

<form hideFilters="false">
  <search id="starttimesearch">
    <query>
      | makeresults
      | eval startHours=relative_time(now(), "@h-36h")
      | eval startTimeStr=strftime(startHours, "%B %d, %Y %H:%M")
    </query>
    <done>
      <set token="form.timeRange.earliest">$result.startHours$</set>
      <set token="form.timeRange.latest">now</set>
      <set token="time_label">Since $result.startTimeStr$</set>
    </done>
  </search>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="timeRange" searchWhenChanged="true">
      <label>Time</label>
      <default> </default>
      <change>
        <set token="time_change_start">strftime($timeRange.earliest$", "%B %d/%Y %H:%M")</set>
        <set token="time_change_end">strftime($timeRange.latest$", "%B %d/%Y %H:%M")</set>
        <eval token="time_label">case($timeRange.latest$ == now(), "Since $time_change_start$", 1==1, "From $time_change_start$ to %time_change_end$)</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        The time label is $time_label$
      </html>
    </panel>
  </row>
</form>
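A few things stand out here, offered as pointers rather than a verified fix: <set> assigns literal text and never evaluates functions, so strftime(...) inside <set> stays the string "strftime(...)"; expression evaluation needs <eval token="...">. There is also a stray quote after $timeRange.earliest$ and $timeRange.latest$, and a % where a $ was presumably meant in %time_change_end$. A sketch of the <change> block using <eval> throughout (untested; note that time-picker tokens can be relative strings like "-36h@h" or "now" rather than epoch values, so they may need relative_time()/strptime() handling before strftime() produces anything):

<change>
  <eval token="time_change_start">strftime($timeRange.earliest$, "%B %d/%Y %H:%M")</eval>
  <eval token="time_change_end">strftime($timeRange.latest$, "%B %d/%Y %H:%M")</eval>
  <eval token="time_label">if("$timeRange.latest$" == "now", "Since $time_change_start$", "From $time_change_start$ to $time_change_end$")</eval>
</change>

Tokens substituted inside string literals (e.g. "Since $time_change_start$") are replaced before the expression is evaluated, which is why they are quoted here.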
I am encountering the following error in the Gitlab Auditor TA when enabling an input. Does anyone know how to fix it?

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/connectionpool.py", line 706, in urlopen
    chunked=chunked,
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/connectionpool.py", line 382, in _make_request
    self._validate_conn(conn)
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/connectionpool.py", line 1010, in _validate_conn
    conn.connect()
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/connection.py", line 421, in connect
    tls_in_tls=tls_in_tls,
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/util/ssl_.py", line 450, in ssl_wrap_socket
    sock, context, tls_in_tls, server_hostname=server_hostname
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
  File "/opt/splunk/lib/python3.7/ssl.py", line 428, in wrap_socket
    session=session
  File "/opt/splunk/lib/python3.7/ssl.py", line 878, in _create
    self.do_handshake()
  File "/opt/splunk/lib/python3.7/ssl.py", line 1147, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1106)
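For what it's worth, [SSL: UNKNOWN_PROTOCOL] during the handshake usually means the client spoke TLS to something that answered with a different protocol, commonly plain HTTP on the configured port or an intercepting proxy. One way to see what the endpoint actually serves (hostname and port are placeholders):

openssl s_client -connect gitlab.example.com:443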
Hi All, my Splunk Cloud version is 9.0.2208.4, and my account already has the sc_admin role. I have around 200 alerts on the alerts page. Is there a way to export all 200 alerts from the alerts page with just one click? I am very new to Splunk; any help is appreciated. Thanks!
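Not from the alerts page itself as far as I know, but alerts are saved searches, so a single REST search can list them all, and the results can then be exported as CSV with the search results Export button. A sketch (alert_type != "always" is a common heuristic for "has an alert condition"; adjust to taste):

| rest /servicesNS/-/-/saved/searches
| search alert_type!="always"
| table title description search cron_schedule actions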
I want to compare two indexes, index1 and index2, and print the values from index1 that do not exist in index2. For example:

index1 (field1)    index2 (field2)
1                  1
2                  3
3                  4

Expected output: 2
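A common pattern, sketched under the assumption that field1 and field2 are the comparison keys: search both indexes, normalize the key into one field, then keep keys that only index1 contributed:

(index=index1) OR (index=index2)
| eval key=coalesce(field1, field2)
| stats values(index) AS found_in BY key
| where mvcount(found_in)=1 AND found_in="index1"
| table key

With the sample data above, only key=2 appears exclusively in index1, so that is the single row returned.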
My legacy TA-nmon is no longer working with Red Hat Enterprise Linux (RHEL) 8. I am looking for advice/procedures on converting to TA-nmon Metricator.
Hi, I have a lookup table that contains a list of sessions with permitted time frames (start day & time / end day & time). I am looking for a way to run a scheduled search to remove any expired entries from the lookup table (e.g. sessions whose end day/time has passed). Can multiple entries be removed from a lookup table via a search? I know I can append to a lookup table, but I am not sure about deletion. Thanks!
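Yes: the usual pattern is a scheduled search that reads the lookup, filters out the expired rows, and writes the survivors back with outputlookup, which replaces the table's contents by default. A sketch, with the lookup name, field name, and time format all hypothetical:

| inputlookup session_permits
| where strptime(end_time, "%Y-%m-%d %H:%M:%S") >= now()
| outputlookup session_permits

Anything dropped by the where clause is effectively deleted from the lookup.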
Hello, The Subject pretty much says what I am looking for. I am new, 3 weeks in, to Dashboard Studio. One of the (many) functionalities(?) missing is the ability to show and hide visualizations. Has anyone figured out a workaround or band-aid in the JSON, or some other override? Thanks in advance and God bless, Genesius
We are having issues with the pan:firewall_cloud parser (which came with the Palo Alto Networks Add-on) not parsing logs from Cortex Data Lake. We are centralizing all of our SASE Prisma and Firewall logs into the Cortex Data Lake and then streaming them from there to Splunk Cloud via the HEC. When I configure that HEC to use the sourcetype pan:firewall_cloud, which was recommended in the setup docs, we don't get field extraction. When I use a standard _json parser, it extracts all fields as expected. Is anyone else having this issue? Is there a fix? I can't use any of the Palo dashboards, and there is no CIM normalization happening without that official Add-on parser working.
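As a stopgap while the add-on parsing is sorted out (this is a local override, not the add-on's own configuration), forcing JSON search-time extraction on that sourcetype would at least restore field extraction while keeping the sourcetype name the dashboards expect:

# props.conf override for the sourcetype
[pan:firewall_cloud]
KV_MODE = json

Note this does not recreate the add-on's CIM field aliases, so it is more a diagnostic step than a fix.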
Hello Splunkers, I am pretty new to Splunk admin. I have the following config set up in indexes.conf, where I set one day for hot buckets:

[default]
maxHotSpanSecs = 86400

[splunklogger]
archiver.enableDataArchive = 0
bucketRebuildMemoryHint = 0
compressRawdata = 1
enableDataIntegrityControl = 1
enableOnlineBucketRepair = 1
enableTsidxReduction = 0
metric.enableFloatingPointCompression = 1
minHotIdleSecsBeforeForceRoll = 0
rtRouterQueueSize =
rtRouterThreads =
selfStorageThreads =
suspendHotRollByDeleteQuery = 0
syncMeta = 1
tsidxWritingLevel =

But I'm not sure why it is chunking the data this way; according to the timestamps, buckets are rolling about every 4.5-5 hours. What changes should I make to indexes.conf?

root@login-prom4:/raid/splunk-var/lib/splunk/abc/db# du -sh ./*
4.0K ./CreationTime
756M ./db_1675137103_1675119933_1
756M ./db_1675154294_1675137102_2
849M ./db_1675171544_1675154293_3
750M ./hot_v1_0
617M ./hot_v1_4

Thanks in advance
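One thing that stands out: hot buckets roll when they reach maxDataSize regardless of maxHotSpanSecs, and the default (auto) is roughly 750 MB, which matches the bucket sizes in the du output almost exactly. If larger, longer-spanning buckets are the goal, maxDataSize is the usual knob; a sketch, assuming size-based rolling really is the cause here:

[splunklogger]
# auto is ~750 MB; auto_high_volume is ~10 GB on 64-bit systems
maxDataSize = auto_high_volume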
We are trying to add users and receiving an error that states: In handler 'users': Could not get info for role that does not exist: winfra-admin. Does anyone have ideas on why this is occurring, and suggestions on how to get around it so that we can add users? When we look at our own assigned roles, we have both Admin and winfra-admin.
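The message suggests something still references winfra-admin on a node where the role does not exist (for example, defined on one instance but deleted or never created on the one handling the request). A diagnostic sketch to list the roles an instance actually knows about:

| rest /services/authorization/roles splunk_server=local
| table title imported_roles

Running this on the instance where the user creation fails should show whether winfra-admin is really defined there.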
Hello, I have a deployment with a UF, an indexer, and a SH. For a CSV, it is recommended to put some options on the UF and others on the indexer, for example field_names. Do you know which types of options go where?
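As a rough rule of thumb (a sketch with a hypothetical sourcetype, not a complete inventory): structured-data settings go on the UF, because the UF itself parses files when INDEXED_EXTRACTIONS is set; search-time settings go on the search head; classic index-time parsing settings (line breaking, timestamp recognition for non-structured data) go on the indexer.

# props.conf on the universal forwarder
[my_csv]
INDEXED_EXTRACTIONS = csv
FIELD_NAMES = host,time,status
TIMESTAMP_FIELDS = time

# props.conf on the search head
[my_csv]
KV_MODE = none

KV_MODE = none on the search head is the documented companion to INDEXED_EXTRACTIONS, so fields already extracted at index time are not extracted a second time.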
My boss asked me to generate a report of people connecting to our network from public VPN providers. I'm using this file from github as a lookup table. I added a column to make it a valid .csv. The first couple of rows look like this:

NetworkAddress,isvpn
1.12.32.0/23,1
1.14.0.0/15,1

I added my own IP address to confirm that the lookup was working. It works if I add it as the first row but not as the last row. Is there a row limit? The file is only 425K, so I don't think I'm running into a file size limit, but it has 22,682 rows.
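There is no row limit anywhere near 22,682; CSV lookups handle far more. What CIDR ranges do need is a lookup definition with a CIDR match type; a plain lookup does exact string matching, which would explain why a literal IP row can match while the network ranges never do. A sketch for transforms.conf (stanza and filename are hypothetical):

[vpn_networks]
filename = vpn_networks.csv
match_type = CIDR(NetworkAddress)

Then in the search, something like | lookup vpn_networks NetworkAddress AS src_ip OUTPUT isvpn (the src_ip field name is assumed).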
I want to edit a dashboard table that shows the current status of an application. The possible statuses are "Up", "Down", and "Warning". I'd like to display "Up" and "Warning" as green and yellow checkmarks respectively, and "Down" as a red circled "X". Is this simple to do by editing the XML? The color part can be edited easily in dashboard options, so that part is done, but substituting the words with symbols is beyond me. I figure it will go something like:

<format type="something" field="Status Now">
  <something type="something">{"Up":#u2713, "Warning":#u2713, "Down":#u29BB}</something>
</format>

Not sure what to put in the "something" fields or if the formatting is correct.
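Simple XML's <format> element handles colors but has no symbol-substitution option, so one workaround is to substitute the characters in the search itself and key the existing color map on the new strings; a sketch using the field name from the example:

| eval "Status Now"=case('Status Now'=="Up", "✓ Up", 'Status Now'=="Warning", "✓ Warning", 'Status Now'=="Down", "⦻ Down")

Keeping the word next to the symbol lets the color formatting still distinguish "✓ Up" (green) from "✓ Warning" (yellow), since both use the same checkmark.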
Hello! I am calculating utilization (already done), but I want to fix my event start times. The start time for a run on a machine is located in the filename, but I am having difficulty writing the regex command and understanding how it works. Example filename string: 013023-123141-46.xml

Step 1: Extract the middle string: 013023-123141-46.xml --> WANT: "123141"
Step 2: Add ":" between every other number: "123141" --> final string: "12:31:41"
Step 3: Convert the time string "12:31:41" into a timestamp: Starttime = strftime(Start_Time,"%h:%m:%s")
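All three steps can be collapsed into one rex plus strptime/strftime, assuming the filename is available in the source field and the leading 013023 segment is the date as MMDDYY:

| rex field=source "(?<fdate>\d{6})-(?<ftime>\d{6})-\d+\.xml$"
| eval Start_Time=strptime(fdate . ftime, "%m%d%y%H%M%S")
| eval Starttime=strftime(Start_Time, "%H:%M:%S")

strptime() turns the combined date+time string into an epoch timestamp, and strftime() formats it back for display. Note the format letters are case-sensitive: %H:%M:%S is hours/minutes/seconds, whereas %h, %m, and %s mean abbreviated month name, month number, and epoch seconds.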
I've got a KV store lookup, AD_Obj_User, defined with fields objectSid, OU, sAMAccountName, and others. It has case-insensitive matching. I've got events that contain the field Sid. I want to look up the sAMAccountName and automate the lookup, but right now not even the manual lookup works.

This works:

| inputlookup AD_Obj_User where objectSid=S-1-2-34-56789012-345678901-234567890-123456
| table objectSid sAMAccountName OU

but this does not work:

index=windows_client source="WinEventLog:PowerShell" Sid=S-1-2-34-56789012-345678901-234567890-123456
| lookup AD_Obj_User objectSid AS Sid
| table OU Sid

I can do the lookup successfully, manually, by using this:

index=windows_client source="WinEventLog:PowerShell" Sid=S-1-2-34-56789012-345678901-234567890-123456
| eval objectSid=Sid
| join type=left objectSid
    [| inputlookup AD_Obj_User
     | table objectSid sAMAccountName OU]
| eval User=sAMAccountName
| fields - sAMAccountName

but it won't get me towards automating the lookup. Any ideas? I'm stumped.
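Two things worth trying, as suggestions rather than a confirmed diagnosis. First, the lookup command with an explicit OUTPUT clause, which also handles the rename the join version does by hand (names taken from the question):

index=windows_client source="WinEventLog:PowerShell" Sid=S-1-2-34-56789012-345678901-234567890-123456
| lookup AD_Obj_User objectSid AS Sid OUTPUT sAMAccountName AS User, OU
| table Sid User OU

Second, comparing the event field against the lookup value for invisible differences: running | eval len=len(Sid) on the events and the same length check on objectSid from | inputlookup will catch leading/trailing whitespace that exact matching trips over.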
I am sending IIS logs to Splunk Cloud. My inputs.conf looks like this:

[monitor://C:\inetpub\logs\LogFiles\W3SVC1]
ignoreOlderThan = 7d
sourcetype = web_log
initCrcLength = 400

[monitor://C:\inetpub\wwwroot\merge\requestlogs\...\*.csv]
ignoreOlderThan = 7d
sourcetype = csv_webrequest
crcSalt = <string>
recursive = true
initCrcLength = 400

It will work fine for a while, with Splunk Cloud getting our data every second, reliably, as the logs update. The next day it will stop working, with log ingest slowing to a trickle: a few lines every few minutes. Restarting the forwarder occasionally works. Making a different change can work (changing the initCrcLength, adding or removing crcSalt, adding or removing alwaysOpenFile) but nothing works for more than a day or so. Does anyone have any suggestions? Thanks in advance.
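Two hypotheses worth testing, not confirmed diagnoses. First, crcSalt = <string> looks like the literal placeholder copied from the inputs.conf spec; the usual special value is <SOURCE>, which salts the checksum with the file's path. Second, ignoreOlderThan has a known gotcha: once a file's modification time crosses the threshold it stops being monitored, and in at least some versions it is not picked up again even after new writes until a restart, which would line up with "restarting the forwarder occasionally works". A sketch of the first stanza with both changes applied as a test:

[monitor://C:\inetpub\logs\LogFiles\W3SVC1]
sourcetype = web_log
initCrcLength = 400
# <SOURCE> is the literal token, angle brackets included
crcSalt = <SOURCE>
# ignoreOlderThan removed as a test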