All Topics


Have an index that is throwing a warning, and the Root Cause says: "The newly created warm bucket size is too large. The bucket size=32630820864 exceeds the yellow_size_threshold=20971520000 from the latest_detected_index." This index was created just like all the other indexes, and it is the only one throwing the warning. At least 6 months of data have been sent to this index, yet it is saying there are only 14 days of data. What could be the issue with this index?
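A sketch that can help verify what Splunk actually has in the index, assuming your_index is the affected one (the name is a placeholder): dbinspect lists each bucket with its size and time span, so both the oversized-bucket claim and the 14-days-of-data claim can be checked directly.

| dbinspect index=your_index
| eval spanHours=round((endEpoch - startEpoch) / 3600, 1)
| table bucketId state sizeOnDiskMB spanHours startEpoch endEpoch
| sort - sizeOnDiskMB

Unusually large buckets with short spans would point at a bucket-sizing setting such as maxDataSize rather than at the data volume itself.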
Hayyy Splunk Education Enthusiasts and the Eternally Curious! Let's have some fun! The indexEducation newsletter takes an untraditional twist on what's new with Splunk training and education. We're hoping this monthly read will quench your desire to learn, grow, and advance your career through our courses and technical training. You can get the answer to Index This (from the subject line) and other 'Things You Needa Know' by reading the January indexEducation Newsletter. We appreciate our community! Thank you!
I have a simple form that has a global search to set up the initial values of a time input. With that global search, I also set a token for a label on my form. I'd like to update that label when a new value is chosen from the time input, but I cannot get it to work. Here is a full simple example to show what I mean. If I change the time picker, I'd expect the label to be updated to reflect that change.

<form hideFilters="false">
  <search id="starttimesearch">
    <query>
      | makeresults
      | eval startHours=relative_time(now(), "@h-36h")
      | eval startTimeStr=strftime(startHours, "%B %d, %Y %H:%M")
    </query>
    <done>
      <set token="form.timeRange.earliest">$result.startHours$</set>
      <set token="form.timeRange.latest">now</set>
      <set token="time_label">Since $result.startTimeStr$</set>
    </done>
  </search>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="timeRange" searchWhenChanged="true">
      <label>Time</label>
      <default> </default>
      <change>
        <set token="time_change_start">strftime($timeRange.earliest$, "%B %d/%Y %H:%M")</set>
        <set token="time_change_end">strftime($timeRange.latest$, "%B %d/%Y %H:%M")</set>
        <eval token="time_label">case($timeRange.latest$ == now(), "Since $time_change_start$", 1==1, "From $time_change_start$ to $time_change_end$")</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        The time label is $time_label$
      </html>
    </panel>
  </row>
</form>
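For anyone comparing the two handlers above: in Simple XML, <set> assigns its body to the token as a literal string (the strftime(...) text is never evaluated), while <eval> evaluates the expression first. A minimal sketch of the same two tokens done with <eval>, assuming the rest of the form is unchanged:

<eval token="time_change_start">strftime($timeRange.earliest$, "%B %d/%Y %H:%M")</eval>
<eval token="time_change_end">strftime($timeRange.latest$, "%B %d/%Y %H:%M")</eval>

One caveat worth hedging: a time picker token can hold relative strings such as "now" or "-36h@h" rather than epoch values, in which case strftime() will not return a result without converting the value first.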
I am encountering the following error in the Gitlab Auditor TA when enabling an input. Does anyone know how to fix it?

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/connectionpool.py", line 706, in urlopen
    chunked=chunked,
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/connectionpool.py", line 382, in _make_request
    self._validate_conn(conn)
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/connectionpool.py", line 1010, in _validate_conn
    conn.connect()
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/connection.py", line 421, in connect
    tls_in_tls=tls_in_tls,
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/util/ssl_.py", line 450, in ssl_wrap_socket
    sock, context, tls_in_tls, server_hostname=server_hostname
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
  File "/opt/splunk/lib/python3.7/ssl.py", line 428, in wrap_socket
    session=session
  File "/opt/splunk/lib/python3.7/ssl.py", line 878, in _create
    self.do_handshake()
  File "/opt/splunk/lib/python3.7/ssl.py", line 1147, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1106)
Hi All, My Splunk Cloud version is 9.0.2208.4, and my account already has the sc_admin role. I have around 200 alerts on the Alerts page. Is there a way to export all 200 alerts from that page with just one click? I am very new to Splunk; any help is appreciated. Thanks!
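A sketch of one way to pull them all at once, assuming the alerts are saved searches visible to your role; the alert_type filter is a common heuristic for separating alerts from plain reports, not a guaranteed rule:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search alert_type!="always"
| table title search cron_schedule actions

Run that in Search, then use the Export button to download the results as CSV (the outputcsv command is typically restricted on Splunk Cloud).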
I want to compare two indexes, index1 and index2, and print the values from index1 that do not exist in index2. For example:

index1      index2
field1      field2
1           1
2           3
3           4

output: 2
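A minimal sketch of one common pattern, assuming the fields are named field1 and field2 as in the example:

index=index1 OR index=index2
| eval value=coalesce(field1, field2)
| stats values(index) AS idx dc(index) AS idx_count BY value
| where idx_count=1 AND idx="index1"
| table value

Events from both indexes are grouped on the shared value, and anything seen only in index1 survives the filter (here, 2).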
My legacy TA-nmon is no longer working with Red Hat Enterprise Linux (RHEL) 8. I am looking for advice/procedures on converting to TA-nmon Metricator.
Hi, I have a lookup table that contains a list of sessions with permitted time frames (start day & time / end day & time). I am looking for a way to run a scheduled search to remove any expired entries from the lookup table (e.g. sessions with end days / times that have passed). Can multiple entries be removed from a lookup table via a search? I know I can append to a lookup table but not sure about deletion.   Thanks!
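Yes, multiple entries can be removed in one scheduled search: outputlookup rewrites the whole table, so reading it, filtering out expired rows, and writing it back is an effective delete. A minimal sketch, assuming a lookup named session_permits.csv and an end timestamp field end_time in a known format (both names and the format are assumptions):

| inputlookup session_permits.csv
| eval end_epoch=strptime(end_time, "%Y-%m-%d %H:%M:%S")
| where end_epoch >= now()
| fields - end_epoch
| outputlookup session_permits.csv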
Hello, The Subject pretty much says what I am looking for. I am new, 3 weeks in, to Dashboard Studio. One of the (many) functionalities(?) missing is the ability to show and hide visualizations. Has anyone figured out a workaround or band-aid in the JSON, or some other override? Thanks in advance and God bless, Genesius
We are having issues with the pan:firewall_cloud parser (which came with the Palo Alto Networks Add-on) not parsing logs from Cortex Data Lake. We are centralizing all of our SASE Prisma and Firewall logs into the Cortex Data Lake and then streaming them from there to Splunk Cloud via the HEC. When I configure that HEC input to use the sourcetype pan:firewall_cloud, which was recommended in the setup docs, we don't get field extraction. When I use a standard _json parser it extracts all fields as expected. Is anyone else having this issue? Is there a fix? I can't use any of the Palo Alto dashboards, and there is no CIM normalization happening without that official Add-on parser working.
Hello Splunkers, I am pretty new to Splunk admin. I have the following config set up in indexes.conf, where I set one day for hot buckets:

[default]
maxHotSpanSecs = 86400

[splunklogger]
archiver.enableDataArchive = 0
bucketRebuildMemoryHint = 0
compressRawdata = 1
enableDataIntegrityControl = 1
enableOnlineBucketRepair = 1
enableTsidxReduction = 0
metric.enableFloatingPointCompression = 1
minHotIdleSecsBeforeForceRoll = 0
rtRouterQueueSize =
rtRouterThreads =
selfStorageThreads =
suspendHotRollByDeleteQuery = 0
syncMeta = 1
tsidxWritingLevel =

But I'm not sure why it is chunking the data this way; according to the bucket timestamps, each bucket spans about 4.5-5 hours. What changes should I make to indexes.conf?

root@login-prom4:/raid/splunk-var/lib/splunk/abc/db# du -sh ./*
4.0K ./CreationTime
756M ./db_1675137103_1675119933_1
756M ./db_1675154294_1675137102_2
849M ./db_1675171544_1675154293_3
750M ./hot_v1_0
617M ./hot_v1_4

Thanks in advance
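For what it's worth, a hot bucket rolls when it reaches maxDataSize (default auto, about 750MB) even if maxHotSpanSecs has not elapsed, and the ~750MB bucket sizes above match that default. A minimal indexes.conf sketch under that assumption:

[splunklogger]
maxHotSpanSecs = 86400
# assumption: data volume, not time, is triggering the roll;
# auto_high_volume raises the size cap to about 10GB so the
# one-day span limit governs instead
maxDataSize = auto_high_volume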
We are trying to add users and receiving an error that states: In handler 'users': Could not get info for role that does not exist: winfra-admin   Does anyone have any ideas on why this is occurring and any suggestions on how to get around this so that we can add users?    We have Admin and winfra-admin assigned to us when looking at our assigned roles. 
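Not a definitive diagnosis, but this error usually means a role being referenced (directly or via imported roles) is not defined on the instance handling the request. A quick sketch to list what that instance actually knows about:

| rest /services/authorization/roles splunk_server=local
| table title imported_roles

If winfra-admin is missing from the output, or a role it imports is, recreating the missing role (or removing the stale reference) should unblock adding users.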
Hello, Everyone

We are excited to share with you… drop 4 of the Community Welcome Center, focused on connecting. After all, this is a community! You can connect with other community members right on this platform by using '@mentions' in replies and comments, or DM a Community member.

Connecting people and content in your conversations

As with other social media platforms, you can "@mention" other members to get their attention, whether you are asking for advice or want to bring them into the conversation. Read the details in How do I "@mention" Community members and content in my posts?

Sometimes conversations need to be private. This is where the platform's private message feature is important and valuable. As you may have seen, I use it a lot to welcome new members to the community. Read the article, How do I send a private message?

Where can I find the Welcome Center?

Check out the Welcome Center for yourself, from the top navigation under Groups.

Cheers,
Claudia Landivar and Ryan Paredez
AppDynamics Community Managers

Related posts
Introducing the Community Welcome Center
Now in the Welcome Center: Search how-to articles
What do you get from subscribing in the Community?

Explore the Welcome Center here
Hello, I have a deployment with a UF (universal forwarder), an indexer, and a search head. For a CSV, it is recommended to put some options on the UF and others on the indexer, for example FIELD_NAMES. Do you know which types of options go where?
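A general rule that applies here: structured-data settings such as INDEXED_EXTRACTIONS and FIELD_NAMES must live in props.conf on the UF, because the forwarder itself parses structured files; index-time settings (line breaking, timestamp recognition, TRANSFORMS) belong on the indexer, and search-time field extractions on the search head. A minimal props.conf sketch for the UF, with the sourcetype and field names as assumptions:

# props.conf on the universal forwarder
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
FIELD_NAMES = host,timestamp,value
TIMESTAMP_FIELDS = timestamp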
My boss asked me to generate a report of people connecting to our network from public VPN providers. I'm using this file from GitHub as a lookup table. I added a column to make it a valid .csv. The first couple of rows look like this:

NetworkAddress,isvpn
1.12.32.0/23,1
1.14.0.0/15,1

I added my own IP address to confirm that the lookup was working. It works if I add it as the first row but not as the last row. Is there a row limit? The file is only 425K, so I don't think I'm running into a file size limit, but it has 22682 rows.
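One thing worth checking: CIDR entries such as 1.12.32.0/23 only match when the lookup definition uses CIDR matching; a plain lookup compares strings exactly, which could explain why an exact-IP test row behaves differently from the ranges. A minimal transforms.conf sketch, with the lookup name as an assumption:

[vpn_ranges]
filename = vpn_ranges.csv
match_type = CIDR(NetworkAddress)
max_matches = 1

For scale, the in-memory lookup limit (max_memtable_bytes in limits.conf) defaults to 10MB, so 425K / 22682 rows is nowhere near it.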
I want to edit a dashboard table that shows the current status of an application. The possible statuses are "Up", "Down", and "Warning". I'd like to display "Up" and "Warning" as a green and yellow checkmark respectively, and "Down" as a red circled "X". Is this simple to do by editing the XML? The color part can be edited easily in dashboard options so that part is done, but substituting the words with symbols is beyond me. I figure it will go something like:

<format type="something" field="Status Now">
  <something type="something">{"Up":#u2713, "Warning":#u2713, "Down":#u29BB}</something>
</format>

Not sure what to put in the "something" fields or if the formatting is correct.
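One workaround that sidesteps the <format> element entirely is substituting the symbols in the search itself, since table cells render Unicode characters directly. A minimal sketch, assuming the column is named "Status Now" and keeping the original word in the cell so any word-keyed color formatting still applies:

| eval "Status Now"=case('Status Now'=="Up", "✓ Up", 'Status Now'=="Warning", "✓ Warning", 'Status Now'=="Down", "⦻ Down")

✓ is U+2713 and ⦻ is U+29BB, the same code points mentioned above.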
Hello! I am calculating utilization (already done), but I want to fix my event start times. The start time for a run on a machine is located in the filename, but I am having difficulty writing the regex command and understanding how it works. Example filename string: 013023-123141-46.xml

Step 1: Extract the middle string: 013023-123141-46.xml --> WANT: "123141"
Step 2: Add ":" between every other number: "123141" --> Final string: "12:31:41"
Step 3: Convert the time string "12:31:41" into a timestamp: Field: Starttime = strftime(Start_Time,"%h:%m:%s")
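A minimal sketch of the three steps, assuming the filename lives in the source field and always follows the 013023-123141-46.xml shape:

| rex field=source "\d{6}-(?<raw_time>\d{6})-\d+\.xml$"
| eval time_str=substr(raw_time,1,2).":".substr(raw_time,3,2).":".substr(raw_time,5,2)
| eval Start_Time=strptime(time_str, "%H:%M:%S")

Note that strptime (string to epoch) is the direction wanted here; strftime goes the other way, and the format letters are case-sensitive (%H:%M:%S, not %h:%m:%s).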
I've got a kvStore lookup, AD_Obj_user, defined with fields objectSid, OU, sAMAccountName, and others.  It has case-insensitive matching. I've got events that contain the field Sid.  I want to lookup the sAMAccountName and automate the lookup, but right now not even the manual lookup works. This works:       | inputlookup AD_Obj_User where objectSid=S-1-2-34-56789012-345678901-234567890-123456 | table objectSid sAMAccountName OU       but this does not work:       index=windows_client source="WinEventLog:PowerShell" Sid=S-1-2-34-56789012-345678901-234567890-123456 | lookup AD_Obj_User objectSid AS Sid | table OU Sid       I can do the lookup successfully, manually, by using this:       index=windows_client source="WinEventLog:PowerShell" Sid=S-1-2-34-56789012-345678901-234567890-123456 | eval objectSid=Sid | join type=left objectSid [| inputlookup AD_Obj_User | table objectSid sAMAccountName OU] | eval User=sAMAccountName | fields - sAMAccountName       but it won't get me towards automating the lookup. Any ideas?  I'm stumped.
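One thing worth trying is an explicit OUTPUT clause so the field mapping is unambiguous; a minimal sketch using the field names from the question:

index=windows_client source="WinEventLog:PowerShell" Sid=S-1-2-34-56789012-345678901-234567890-123456
| lookup AD_Obj_User objectSid AS Sid OUTPUT sAMAccountName AS User OU
| table Sid User OU

If that still returns nothing, comparing the stored objectSid values against the event Sid character by character (including case) is the next suspect: the lookup command matches case-sensitively unless the lookup definition's case_sensitive_match setting says otherwise.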
I am sending IIS logs to SplunkCloud.  My inputs.conf looks like this:   [monitor://C:\inetpub\logs\LogFiles\W3SVC1] ignoreOlderThan = 7d sourcetype = web_log initCrcLength = 400 [monitor://C:\inetpub\wwwroot\merge\requestlogs\...\*.csv] ignoreOlderThan = 7d sourcetype = csv_webrequest crcSalt = <string> recursive = true initCrcLength = 400   It will work fine for a while, with SplunkCloud getting our data every second reliably as logs update.   The next day it will stop working, with log ingest slowing to a trickle: a few lines every few minutes. Restarting the forwarder occasionally works.  Making a different change can work (changing the initCrcLength, adding or removing crcSalt, adding or removing alwaysOpenFile) but nothing works for more than a day or so.   Does anyone have any suggestions? Thanks in advance.
Hello Splunk community, I am having some difficulty with field extraction in CrowdSec logs, which are formatted as JSON (using the CrowdSec plugin dedicated to this task). I know there are a lot of posts on this forum about JSON field extraction, but I didn't find any case that helped me with this. First, here is a sample event (collapsed objects shown as { ... }):

{
  "capacity": 40,
  "decisions": [
    {
      "duration": "4h",
      "origin": "crowdsec",
      "scenario": "crowdsecurity/http-crawl-non_statics",
      "scope": "Ip",
      "type": "ban",
      "value": "confidential"
    }
  ],
  "events": [
    {
      "meta": [
        { ... },
        { ... },
        {
          "key": "IsInEU",
          "value": "true"
        },
        { ... } (14 more collapsed meta objects)
      ],
      "timestamp": "2023-02-01T15:22:29+01:00"
    },
    { ... } (5 more collapsed event objects)
  ],
  "events_count": 52,
  "labels": null,
  "leakspeed": "500ms",
  "machine_id": "confidential-2@172.18.218.4",
  "message": "Ip confidential performed 'crowdsecurity/http-crawl-non_statics' (52 events over 22.814207421s) at 2023-02-01 14:22:29.975537808 +0000 UTC",
  "remediation": true,
  "scenario": "crowdsecurity/http-crawl-non_statics",
  "scenario_hash": "f0fa40870cdeea7b0da40b9f132e9c6de5e32d584334ec8a2d355faa35cde01c",
  "scenario_version": "0.3",
  "simulated": false,
  "source": {
    "as_name": "confidential",
    "as_number": "confidential",
    "cn": "FR",
    "ip": "confidential",
    "latitude": "confidential",
    "longitude": "confidential",
    "range": "176.128.0.0/11",
    "scope": "Ip",
    "value": "confidential"
  },
  "start_at": "2023-02-01T14:22:07.161331449Z",
  "stop_at": "2023-02-01T14:22:29.97553887Z"
}

I successfully accessed the fields under 'source' with something like source.ip or source.as_name, but I cannot find a solution for accessing the value of a field in 'events.meta.IsInEU'. I tried different things with the spath command, but unfortunately none of them worked. I think the issue is that the fields in meta do not have the same format as in source:

"events": [
  {
    "meta": [
      { ... },
      { ... },
      { <should be a name here>:
        "key": "IsInEU",
        "value": "true"
      }

As you can see above, I think it would be much easier if there was a name here so I could access the underlying key and value (events.meta.should_be_a_name_here.key|value). I don't know if there is some kind of index I could use to access the data, like events{}.meta{0}.key|value. Also, I didn't expand the other fields that are aligned with meta because they are all named 'meta' and the structure under them is the same as the one you can see for the first one. The purpose of all this is to run operations such as 'stats count by <value of the key IsInEU>'. Thanks in advance for all your answers. Best Regards
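A minimal sketch of the usual expand-then-filter pattern for arrays of {key, value} pairs, assuming the raw event is the JSON above:

| spath path=events{}.meta{} output=meta
| mvexpand meta
| spath input=meta
| where key="IsInEU"
| stats count BY value

spath path=events{}.meta{} pulls every meta object into one multivalue field, mvexpand gives each object its own row, and the second spath re-parses that single object so key and value become ordinary fields you can filter and aggregate on.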