All Topics

We are having issues with the pan:firewall_cloud parser (which came with the Palo Alto Networks Add-on) not parsing logs from Cortex Data Lake. We are centralizing all of our SASE Prisma and firewall logs into the Cortex Data Lake and then streaming them from there to Splunk Cloud via HEC. When I configure that HEC input to use the sourcetype pan:firewall_cloud, which is what the setup docs recommend, we don't get field extraction. When I use a standard _json sourcetype it extracts all fields as expected. Is anyone else having this issue? Is there a fix? I can't use any of the Palo Alto dashboards, and there is no CIM normalization happening without the official Add-on parser working.
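To rule out the sourcetype simply not being applied to the events, this is the sanity check I've been running (the index name here is an assumption, not our real one):

  index=pan_logs earliest=-15m
  | stats count by sourcetype, source

If the events really do arrive tagged as pan:firewall_cloud and still don't extract, then I assume the Add-on's search-time props aren't reaching the search tier, but I'd welcome other ideas.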
Hello Splunkers, I am pretty new to Splunk administration. I have the following configuration in indexes.conf, where I set hot buckets to span one day:

  [default]
  maxHotSpanSecs = 86400

  [splunklogger]
  archiver.enableDataArchive = 0
  bucketRebuildMemoryHint = 0
  compressRawdata = 1
  enableDataIntegrityControl = 1
  enableOnlineBucketRepair = 1
  enableTsidxReduction = 0
  metric.enableFloatingPointCompression = 1
  minHotIdleSecsBeforeForceRoll = 0
  rtRouterQueueSize =
  rtRouterThreads =
  selfStorageThreads =
  suspendHotRollByDeleteQuery = 0
  syncMeta = 1
  tsidxWritingLevel =

But I'm not sure why it is chunking the data this way; judging by the timestamps, each bucket covers roughly 4.5-5 hours. What changes should I make to indexes.conf?

  root@login-prom4:/raid/splunk-var/lib/splunk/abc/db# du -sh ./*
  4.0K ./CreationTime
  756M ./db_1675137103_1675119933_1
  756M ./db_1675154294_1675137102_2
  849M ./db_1675171544_1675154293_3
  750M ./hot_v1_0
  617M ./hot_v1_4

Thanks in advance.
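As a sketch of the change I'm considering (this is my own assumption: the bucket sizes above sit right around the 750 MB default of maxDataSize = auto, which would make the buckets roll on size long before they reach 86400 seconds):

  [splunklogger]
  maxHotSpanSecs = 86400
  # assumption: let buckets grow larger (~10 GB on 64-bit) so the time limit,
  # not the size limit, is what triggers the roll
  maxDataSize = auto_high_volume

Is that the right knob, or am I misreading why the buckets roll?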
We are trying to add users and receiving an error that states: In handler 'users': Could not get info for role that does not exist: winfra-admin. Does anyone have any idea why this is occurring, and any suggestions on how to get around it so that we can add users? We have Admin and winfra-admin assigned to us when looking at our own assigned roles.
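In case it helps narrow things down, this is the check I was planning to run to see which roles the instance actually knows about (just a diagnostic sketch, not from our environment):

  | rest /services/authorization/roles splunk_server=local
  | table title imported_roles

If winfra-admin doesn't show up there, I assume the role exists somewhere else (LDAP mapping, another search head) but not on the instance handling the user creation.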
Hello, I have a deployment with a universal forwarder (UF), an indexer, and a search head. For a CSV input, it is recommended to put some options on the UF and others on the indexer, for example FIELD_NAMES. Do you know which types of options go where?
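To make the question concrete, here is a sketch of the split I have in mind (the sourcetype and field names are made up): since INDEXED_EXTRACTIONS is applied where the file is read, I would expect the structured-data settings to live in props.conf on the UF, for example

  [my_csv]
  INDEXED_EXTRACTIONS = csv
  FIELD_NAMES = timestamp,host,status
  TIMESTAMP_FIELDS = timestamp

while index-time settings not tied to structured parsing (TRANSFORMS, TZ, and similar) would stay in props.conf on the indexer. Is that the right way to think about it?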
My boss asked me to generate a report of people connecting to our network from public VPN providers. I'm using this file from GitHub as a lookup table. I added a column to make it a valid .csv. The first couple of rows look like this:

  NetworkAddress,isvpn
  1.12.32.0/23,1
  1.14.0.0/15,1

I added my own IP address to confirm that the lookup was working. It works if I add it as the first row but not as the last row. Is there a row limit? The file is only 425K, so I don't think I'm running into a file size limit, but it has 22682 rows.
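In case the problem is how I've defined the lookup rather than its size: since the first column is CIDR ranges, I assume the lookup definition needs CIDR matching turned on, along the lines of this transforms.conf sketch (the stanza and file names are my own):

  [vpn_ranges]
  filename = vpn_ranges.csv
  match_type = CIDR(NetworkAddress)

and then in a search something like

  | lookup vpn_ranges NetworkAddress AS src_ip OUTPUT isvpn

where src_ip is whatever field holds the client address. Does that sound like the missing piece, or is there really a row limit I'm hitting?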
I want to edit a dashboard table that shows the current status of an application. The possible statuses are "Up", "Down", and "Warning". I'd like to display "Up" and "Warning" as a green and yellow checkmark respectively, and "Down" as a red circled "X". Is this simple to do by editing the XML? The color part can be edited easily in the dashboard options, so that part is done, but substituting the words with symbols is beyond me. I figure it will go something like:

  <format type="something" field="Status Now">
    <something type="something">{"Up":#u2713, "Warning":#u2713, "Down":#u29BB}</something>
  </format>

Not sure what to put in the "something" fields or if the formatting is correct.
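One approach I'm wondering about, as a sketch (I'm not sure the <format> element can rewrite cell text at all, and the underlying Status field name here is a guess): do the substitution in the search that feeds the table and keep only the coloring in the XML, e.g.

  | eval "Status Now" = case(Status=="Up", "✓", Status=="Warning", "✓", Status=="Down", "⦻")

and then apply the existing color formatting to the "Status Now" column. Would that be the usual way to do it?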
Hello! I am calculating utilization (already done), but I want to fix my event start times. The start time for a run on a machine is located in the filename, but I am having difficulty writing the regex and understanding how it works. Example filename string: 013023-123141-46.xml

Step 1: extract the middle string: 013023-123141-46.xml --> want "123141"
Step 2: add ":" between every other digit: "123141" --> final string "12:31:41"
Step 3: convert the time string "12:31:41" into a timestamp, something like: Starttime = strftime(Start_Time,"%h:%m:%s")
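Here is the sketch I've been trying to get working (I'm assuming the filename is in the source field and that the leading 013023 is the date as MMDDYY; adjust if not):

  | rex field=source "(?<run_date>\d{6})-(?<run_hms>\d{6})-\d+\.xml$"
  | eval Start_Time = strptime(run_date . run_hms, "%m%d%y%H%M%S")
  | eval Starttime = strftime(Start_Time, "%H:%M:%S")

i.e. capture the middle six digits with rex, parse them with strptime (strftime only goes the other way, from epoch to string), and then format the result for display.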
I've got a KV Store lookup, AD_Obj_User, defined with fields objectSid, OU, sAMAccountName, and others. It has case-insensitive matching. I've got events that contain the field Sid. I want to look up the sAMAccountName and automate the lookup, but right now not even the manual lookup works.

This works:

  | inputlookup AD_Obj_User where objectSid=S-1-2-34-56789012-345678901-234567890-123456
  | table objectSid sAMAccountName OU

but this does not work:

  index=windows_client source="WinEventLog:PowerShell" Sid=S-1-2-34-56789012-345678901-234567890-123456
  | lookup AD_Obj_User objectSid AS Sid
  | table OU Sid

I can do the lookup successfully, manually, by using this:

  index=windows_client source="WinEventLog:PowerShell" Sid=S-1-2-34-56789012-345678901-234567890-123456
  | eval objectSid=Sid
  | join type=left objectSid [| inputlookup AD_Obj_User | table objectSid sAMAccountName OU]
  | eval User=sAMAccountName
  | fields - sAMAccountName

but it won't get me towards automating the lookup. Any ideas? I'm stumped.
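For the automation step, this is the props.conf sketch I was planning to use once the ad-hoc lookup command behaves (the stanza key and output field names are my guesses; our events may well be under a different sourcetype):

  [WinEventLog:PowerShell]
  LOOKUP-ad_obj_user = AD_Obj_User objectSid AS Sid OUTPUT sAMAccountName AS User OU

but obviously that's pointless until I understand why | lookup fails where | inputlookup plus join succeeds.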
I am sending IIS logs to SplunkCloud. My inputs.conf looks like this:

  [monitor://C:\inetpub\logs\LogFiles\W3SVC1]
  ignoreOlderThan = 7d
  sourcetype = web_log
  initCrcLength = 400

  [monitor://C:\inetpub\wwwroot\merge\requestlogs\...\*.csv]
  ignoreOlderThan = 7d
  sourcetype = csv_webrequest
  crcSalt = <string>
  recursive = true
  initCrcLength = 400

It will work fine for a while, with SplunkCloud getting our data every second, reliably, as the logs update. The next day it will stop working, with log ingest slowing to a trickle: a few lines every few minutes. Restarting the forwarder occasionally works. Making a different change can work (changing the initCrcLength, adding or removing crcSalt, adding or removing alwaysOpenFile), but nothing works for more than a day or so. Does anyone have any suggestions? Thanks in advance.
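When it next stalls, this is what I plan to check on the forwarder (a diagnostic sketch; the path assumes a default Windows install):

  "C:\Program Files\SplunkUniversalForwarder\bin\splunk" list inputstatus

to see what the tailing processor thinks about each monitored file, and on the Splunk Cloud side something like

  index=_internal host=<forwarder_host> source=*splunkd.log* (component=TailReader OR component=WatchedFile)

in case the files are being skipped or re-read from the wrong offset. If anyone has seen this symptom before, though, I'd love to hear what the root cause turned out to be.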
Hello Splunk community, I'm having difficulty with field extraction in CrowdSec logs, which are JSON-formatted (using the CrowdSec plugin dedicated to this task). I know there are a lot of posts on this forum about JSON field extraction, but I didn't find any case that helped me here. First, a sample event (objects I left collapsed are shown as {...}):

  capacity: 40
  decisions: [
    {
      duration: 4h
      origin: crowdsec
      scenario: crowdsecurity/http-crawl-non_statics
      scope: Ip
      type: ban
      value: confidential
    }
  ]
  events: [
    {
      meta: [
        {...}
        {...}
        {
          key: IsInEU
          value: true
        }
        {...} (and more entries like this)
      ]
      timestamp: 2023-02-01T15:22:29+01:00
    }
    {...} (and more events like this)
  ]
  events_count: 52
  labels: null
  leakspeed: 500ms
  machine_id: confidential-2@172.18.218.4
  message: Ip confidential performed 'crowdsecurity/http-crawl-non_statics' (52 events over 22.814207421s) at 2023-02-01 14:22:29.975537808 +0000 UTC
  remediation: true
  scenario: crowdsecurity/http-crawl-non_statics
  scenario_hash: f0fa40870cdeea7b0da40b9f132e9c6de5e32d584334ec8a2d355faa35cde01c
  scenario_version: 0.3
  simulated: false
  source: {
    as_name: confidential
    as_number: confidential
    cn: FR
    ip: confidential
    latitude: confidential
    longitude: confidential
    range: 176.128.0.0/11
    scope: Ip
    value: confidential
  }
  start_at: 2023-02-01T14:22:07.161331449Z
  stop_at: 2023-02-01T14:22:29.97553887Z

I can successfully access the fields under source (source.ip, source.as_name), but I cannot find a way to get at the value of a field under events.meta, such as IsInEU. I tried different things with the spath command, but unfortunately none of them worked. I think the issue is that the entries in meta do not have the same shape as source:

  events: [
    {
      meta: [
        {...}
        {...}
        {<should be a name here>:
          key: IsInEU
          value: true
        }

As you can see above, I think it would be much easier if there were a name here, so I could access the key and value underneath it (events.meta.should_be_a_name_here.key|value). I don't know if there is some kind of index I could use to reach the data, like events{}.meta{0}.key|value. I didn't expand the other entries alongside meta because they are all named meta and the structure under them is the same as the first one. The purpose of all of this is to run something like 'stats count by <value of the key IsInEU>'.

Thanks in advance for all your answers. Best regards
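Here is the kind of search I've been experimenting with (a sketch, assuming the raw event is valid JSON so spath can re-parse the extracted fragments; <base search> stands in for whatever returns the CrowdSec events):

  <base search>
  | spath output=meta path="events{}.meta{}"
  | mvexpand meta
  | spath input=meta
  | where key="IsInEU"
  | stats count by value

i.e. pull every meta object out as a multivalue field, expand it to one result per object, re-run spath on each fragment to get key and value, and then count by value for the key I care about. Does that look like a reasonable approach, or is there a cleaner way?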
Hi, is it possible to create a single health rule schedule for the timeline below?

Mon 5 PM - Tue 9 AM
Tue 5 PM - Wed 9 AM
Wed 5 PM - Thu 9 AM
Thu 5 PM - Fri 9 AM
Fri 5 PM - Mon 9 AM

Basically, I need an alert schedule for out-of-business hours; business hours are Mon - Fri, 9 AM - 5 PM. Is it possible to cover the above in a single schedule window?
Hi everyone, I'm a newbie to Splunk. I installed Splunk Enterprise on a server that is joined to AD. On another machine, I have installed the Universal Forwarder. I have an admin account for AD, and with that account I installed the forwarder on the other machine. I want to monitor all the logs from that machine. When I try to collect the logs, it says "Unable to get wmi classes from host 'xxxxx'. This host may not be reachable or WMI may be misconfigured." I have followed the steps under "Configure Active Directory for running Splunk software as a domain user" on the "Prepare your Windows network to run Splunk Enterprise as a network or domain user" page. Is there something I am missing? Also, I'm not sure how to collect the remote logs.
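Since the forwarder is already installed on the remote machine, I was also wondering whether I even need WMI, or whether a local event log input on the forwarder would be the simpler route. A sketch of the inputs.conf I have in mind on the forwarder (these are the standard channel names; adjust as needed):

  [WinEventLog://Application]
  disabled = 0

  [WinEventLog://Security]
  disabled = 0

  [WinEventLog://System]
  disabled = 0

Would that avoid the WMI configuration entirely?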
Hello, I am trying to get a regex to work in ingest actions to match a list of event codes from Windows Security logs. The following regex matches my sample text on regex101.com:

  ^(EventCode=(1102|4616|4624|4625|4634|46484657|4697|4698|4699|4700|4701|4702|4719|4720|4722|4723|4725|4728|4732|4735|4737|4738|4740|4755|4756|4767|4772|4777|4782|4946|4947|4950|4954|4964|5025|5031|5152|5153|5155|5157|5447))$

But it doesn't find any matches when used in ingest actions. Given the event codes listed above, can someone help me find a regex that will work inside of ingest actions? Thanks!
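For what it's worth, the variant I'm planning to test next is below. Two assumptions of my own: ingest actions seem to apply the expression to the whole raw event rather than to an isolated "EventCode=..." line, so I dropped the ^/$ anchors, and I split what looks like a typo, 46484657, into 4648|4657.

  EventCode=(1102|4616|4624|4625|4634|4648|4657|4697|4698|4699|4700|4701|4702|4719|4720|4722|4723|4725|4728|4732|4735|4737|4738|4740|4755|4756|4767|4772|4777|4782|4946|4947|4950|4954|4964|5025|5031|5152|5153|5155|5157|5447)(\s|$)

Does that sound right, or does the ingest actions rule editor expect something else?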
Hi. I have a dataset and one of the indexed columns is "X". I need to check whether or not this "X" feature is normally distributed by plotting a histogram. I tried doing this, but it doesn't plot the actual values. Can you please help?
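This is the shape of search I think should give a histogram over X, in case I'm simply holding it wrong (the span is a guess and would need tuning to the range of X, and <base search> stands in for whatever returns the dataset):

  <base search>
  | bin X span=10
  | stats count by X
  | sort 0 +X

rendered as a column chart, so that each bar is the number of events whose X value falls in that bucket.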
My aim: the query below gives me the count of successes and failures by b_key, c_key. I want to get the distinct count of b_key for which a failure occurred. In my example it would be 2.

  | eval Complete = case(key_a="complete", "Complete")
  | eval Init = case(key_a="init", "Init")
  | stats count(Init) as Init, count(Complete) as Complete by b_key, c_key
  | eval Fcount = if((Init != Complete),1,0)
  | eval Scount = if((Init = Complete),1,0)
  | stats sum(Fcount) as FailureCount, sum(Scount) as SuccessCount
  | eval total=(FailureCount+SuccessCount)
  | eval Success% = round(SuccessCount/total*100,2)
  | eval Failure% = round(FailureCount/total*100,2)
  | table FailureCount, SuccessCount, Success%, Failure%
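The piece I'm missing is presumably an extra aggregation in the second stats, something like this sketch (I haven't verified it against my data yet):

  | stats sum(Fcount) as FailureCount, sum(Scount) as SuccessCount, dc(eval(if(Fcount==1, b_key, null()))) as FailedBKeys

so that FailedBKeys counts each b_key only once, no matter how many c_key combinations failed for it. Is dc(eval(...)) the right construct here?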
I am struggling to figure out how to get the visualization I want, if it's even possible. Timechart works great for this purpose but only with one by clause (aggregated on one value), so, if I have understood it properly, I should use the stats command, which supports multiple group-by fields. The end goal is one graph showing the following:

Y-axis: count of the events
X-axis: time
Graph lines: one line shows the count for a unique combination of responseCode and location

or possibly using trellis (probably better), split by location, so that each location is a separate graph with one line per responseCode showing its count. The search as it is now:

  <<SEARCH>>
  | bin _time as time span=15m
  | stats count by _time,body.records.properties.responseCode,body.records.location

If I use trellis split by location, this results in two graphs, one per location, where each has one line for the count (regardless of response code) and another line for the response code itself (i.e. response code 200 becomes a line at 200 on the Y-axis). But I want a single line showing the count per unique responseCode, and the legend should display the responseCode (e.g. 200). Any ideas?
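One idea I've been toying with, as a sketch: concatenate the two fields into a single series field and let timechart split on that (the single quotes are just how I reference the dotted field names in eval),

  <<SEARCH>>
  | eval series = 'body.records.location' . ":" . 'body.records.properties.responseCode'
  | timechart span=15m count by series

which should give one line per location:responseCode combination in a single graph, with the combination shown in the legend. Whether that can then be split into trellis panels per location is the part I'm still unsure about.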
I have a dashboard showing website user-journey data by reading various elements from a log message. Now the structure of the logs has changed in such a way that I will have to change my queries to get the same data elements. Say the logs changed on 1st February and I want to use the same dashboard to see data from before and after the change. So my question is: how do I use two queries against the same data source, applying the first query before a hardcoded time (e.g. 2023-02-01 00:00:00) and the other after that time, and join the records together to generate my stats? BTW, I also have a global date-time picker which dictates how far back in time I perform the search.
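The shape I've been sketching (the index, sourcetype, field names, and extraction patterns here are all made up, just to show the idea) is to run both extractions in one search and pick per event based on _time:

  index=web sourcetype=app_logs
  | eval cutoff = strptime("2023-02-01 00:00:00", "%Y-%m-%d %H:%M:%S")
  | rex field=_raw "(?<page_old>old-format-pattern)"
  | rex field=_raw "(?<page_new>new-format-pattern)"
  | eval page = if(_time < cutoff, page_old, page_new)
  | stats count by page

That way the global time picker still works unchanged, because both branches live in the same search. Is that a sensible pattern, or is append with two earliest/latest windows the more usual way?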
I've made an app and put my outputs.conf in "$Splunk_Home\etc\apps\app_name\local". Since there is no outputs.conf in "$Splunk_Home\etc\system\local", I get an error message in the log stating: "LightWeightForwarder/UniversalForwarder not configured. Please configure outputs.conf." If I move the outputs.conf file from my app to "$Splunk_Home\etc\system\local", it works. I have an old setup that I inherited where this approach is working, but it seems like the file in my app is not being read for some reason. I've checked that the user has read access to the files in my app. Unfortunately, I don't have documentation from the old setup, so I can't see how it was implemented. Is someone able to point me in the right direction? I've tried searching for this issue but couldn't find anything related. Thanks in advance.
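In case it helps, this is the check I was going to run on the forwarder to see whether the file is being picked up at all and which copy wins the precedence (just a diagnostic sketch):

  splunk btool outputs list --debug

which should print every outputs.conf setting together with the file it came from. If my app's file doesn't appear there at all, I assume the app itself isn't being loaded (missing app.conf, wrong folder depth, or similar), but I'd appreciate confirmation.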
Above is the title of my dashboard; I need to add the present date alongside the title. For the panel above, we need to add the event information, showing Success as green, Running as blue, Error as red, and Wait as yellow, like below. Along with the above, we need to show the total event details on the left side of the snippet.
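For the date-in-title part, the sketch I had in mind (Simple XML; the token name, search id, and panel title text are my own placeholders) is a hidden global search that sets a token:

  <search id="set_today">
    <query>| makeresults | eval today=strftime(now(), "%d %b %Y")</query>
    <done>
      <set token="today">$result.today$</set>
    </done>
  </search>

and then a panel title that uses it:

  <title>Event Status - $today$</title>

so the title picks up the current date each time the dashboard loads. I'm less sure how to handle the colour mapping and the totals on the left, so any pointers there are welcome.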
Hi Splunk friends, I'm using Windows data for this example. Over a time range of the last 7 days, I want to count the number of hosts in my Windows index with a span of 1 day. The result I am expecting is a timechart where I can see whether the total number of hosts increases or decreases each day. To do that I am using this search:

  index=<windows Index> Computer=XYZ*
  | dedup Computer
  | timechart count(Computer) as count span=1d

The problem I am having is that the search never ends, so it only shows a flat line and a peak on the last day. I have around 1000 hosts. Is there a way to collect this data in a more efficient way? Thanks in advance.
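A sketch of what I think I actually need (my reading is that dedup across the whole window keeps only each host's newest event, which would explain the flat line): count distinct hosts per day instead,

  index=<windows Index> Computer=XYZ*
  | timechart span=1d dc(Computer) as hosts

or, if that is still slow, the tstats variant against indexed fields (assuming the host field carries the machine name, since Computer isn't indexed):

  | tstats dc(host) as hosts where index=<windows Index> by _time span=1d

Would either of those be the recommended approach?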