All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Please help me with this. I have created a CSV containing START and END time ranges, and I want to count the events that fall within each range. How should I write the query? <pre> started,completed 2020/10/2 08:00,2020/10/2 10:00 2020/10/2 11:00,2020/10/2 11:30 2020/10/2 12:00,2020/10/2 12:30 2020/10/2 15:00,2020/10/2 16:00 2020/10/2 16:00,2020/10/2 18:00 2020/10/2 18:00,2020/10/2 19:00 2020/10/2 10:00,2020/10/2 22:00 </pre>
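A sketch of one possible approach (the lookup name timeranges.csv and index your_index are placeholders): convert each row's started/completed to epoch time, then use map to run one count per range.

<pre>
| inputlookup timeranges.csv
| eval earliest=strptime(started, "%Y/%m/%d %H:%M")
| eval latest=strptime(completed, "%Y/%m/%d %H:%M")
| map maxsearches=20 search="search index=your_index earliest=$earliest$ latest=$latest$ | stats count | eval started=\"$started$\", completed=\"$completed$\""
</pre>

Note that map runs one subsearch per row, so it can be slow for large CSVs; keep maxsearches at least as large as the number of rows.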
I'm trying to validate this search, but I'm getting this error: Error in 'tstats' command: This command must be the first command of a search. I don't know why I'm getting this error, as it is the first command in the search:

<pre>
| tstats count as api_calls from datamodel=Change where All_Changes.user!=unknown All_Changes.status=success by All_Changes.user _time span=1h
| `drop_dm_object_name("All_Changes")`
| eval HourOfDay=strftime(_time, "%H")
| eval HourOfDay=floor(HourOfDay/4)*4
| eval DayOfWeek=strftime(_time, "%w")
| eval isWeekend=if(DayOfWeek >= 1 AND DayOfWeek <= 5, 0, 1)
| table _time api_calls, user, HourOfDay, isWeekend
| eventstats dc(api_calls) as api_calls by user, HourOfDay, isWeekend
| where api_calls >= 1
| fit DensityFunction api_calls by "user,HourOfDay,isWeekend" into cloud_excessive_api_calls_v1 dist=norm show_density=true
| tstats count as api_calls from datamodel=Change where All_Changes.user!=unknown All_Changes.status=success by All_Changes.user _time span=1h
| `drop_dm_object_name("All_Changes")`
| eval HourOfDay=strftime(_time, "%H")
| eval HourOfDay=floor(HourOfDay/4)*4
| eval DayOfWeek=strftime(_time, "%w")
| eval isWeekend=if(DayOfWeek >= 1 AND DayOfWeek <= 5, 0, 1)
| table _time api_calls, user, HourOfDay, isWeekend
| eventstats dc(api_calls) as api_calls by user, HourOfDay, isWeekend
| where api_calls >= 1
| fit DensityFunction api_calls by "user,HourOfDay,isWeekend" into cloud_excessive_api_calls_v1 dist=norm show_density=true
</pre>

Is this a bug, or what? The search is produced by ESCU; I'm just making sure it works with the data we have.
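One observation, hedged: as pasted, the pipeline contains | tstats a second time after the fit command, and a tstats anywhere but first position raises exactly this error unless it runs in prestats/append mode. If a second tstats is really intended, the usual pattern looks like the sketch below (illustrative shape only, not the full ESCU search):

<pre>
| tstats prestats=t count from datamodel=Change where All_Changes.status=success by All_Changes.user _time span=1h
| tstats prestats=t append=t count from datamodel=Change where All_Changes.status=failure by All_Changes.user _time span=1h
| stats count by All_Changes.user, _time
</pre>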
Why do modular inputs like CrowdStrike Stream hang so often?
Hi guys; I want to monitor a single file with a universal forwarder. It works perfectly until the size of the file reaches 250 MB. At that point, when I open the file, new logs are there, but the size of the file does not change. In this circumstance, the Splunk UF can't sense the changes and send new logs to the indexers until I restart the UF! So is there any configuration that I missed, or any suggestion to solve this problem? Thanks in advance.
I am trying to achieve a simple pie chart that will display the results of two different stats commands: | inputlookup records.csv where condition1=compliant | stats count(host) as compliant | append records.csv where (condition2=noncompliant AND condition3=noncompliant AND condition4=noncompliant) | stats count(host) as noncompliant | <I am missing a command at this point to be able to produce the pie chart below> Please advise. Thanks and regards.
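One possible sketch, assuming both counts come from the same records.csv (the condition fields are taken from the question as-is): compute both counts in a single stats, then transpose so each count becomes a row the pie chart can use as a category.

<pre>
| inputlookup records.csv
| stats count(eval(condition1="compliant")) as compliant,
        count(eval(condition2="noncompliant" AND condition3="noncompliant" AND condition4="noncompliant")) as noncompliant
| transpose column_name=status
| rename "row 1" as count
</pre>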
Hello, does anyone know if it is possible to search for policy type in Zscaler? The Policy Type is output via CSV as: Advanced Threat Protection / None, with "Advanced Threat Protection" being the paid-for subscription and "None" being the basic ATP feed. Thanks, Dave
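If the Zscaler NSS feed is already indexed and the policy type lands in an extracted field, a plain search sketch like this may work (the index, sourcetype, and field name here are all assumptions; check your own extractions for the field that actually holds "Advanced Threat Protection"):

<pre>
index=zscaler sourcetype="zscalernss-web" policy_type="Advanced Threat Protection"
</pre>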
Hi, I am trying to create a dashboard to show the availability of a service. To get this, I'm picking the calls-per-minute and errors-per-minute metrics. But since there are no errors, the errors-per-minute graph shows no data. And when I use it in an expression: ({calls} - ({errors} + 0)) * 100 / ({calls} + 1), even though calls is not 0, it still shows the graph as unavailable. Is there a way to set a static value when there is no data? In this case, the availability should be 100%, since there are no errors and we do have calls.
Hey All, I was just curious if there is a way to calculate how long it should take to thaw/rebuild frozen buckets for searching. We recently started a thaw of a month's worth of Windows event data that is taking a considerable amount of time: going on 5 hours now across 3 clustered IDXs. We have 6 total IDXs clustered, and we perform the thaw/rebuild on 3 at a time to ensure we can balance data ingestion and the thaw process. I would love to be able to set expectations, when other business units ask for historical data to be loaded, on how long it will take to be ready. I'm sure it depends on CPU, HDD speed, and bucket size, but I was curious if anyone has a way to make some rough calculations. Thanks! Andrew
We've been following the documentation to the letter (this documentation) but when the report has run, and is available in Splunk, the embedded report we have in our web page built via Google Sites shows an empty frame with a red exclamation point and the phrase "Report not available".  As for the scheduling, it is a quick search that takes roughly 30 seconds to run, and is scheduled to run every hour. We have tried accessing the embedded report via multiple browsers to rule that out as an issue and they all show the same frame content. In case this makes a difference, we are on Splunk Cloud 8.1.2009
Hello Splunk Experts, My organization has Splunk Cloud and Enterprise Security. I was wondering if Splunk is capable of acting as a STIX/TAXII client so that I can enroll with a threat intelligence provider and have those feeds come directly into Splunk. I know Splunk has a way for me to upload STIX/TAXII files, but that's manual.
Hello All, I hope you all are doing well. I have a situation wherein I have to pass the current day value (Sun, Mon, Tue, etc.) into a regex dynamically, to capture the value associated with that day in a lookup. I have a lookup, maintenance.csv, with the fields below:

host; maintenance_days
host1; Sun=1, Mon=2, Tue=3 and so on

What I want is: depending on the day on which my search is run, it should fetch the corresponding value for that day. For example, if my search runs on Mon, it should return 2; if it runs on Tue, it should return 3; etc. I thought I could do this by calculating the day at search time and passing it as a variable into my regex, extracting the value for the day (1, 2, 3, etc.) with the rex command, but it's not working.

Search:
| inputlookup "maintenance.csv" | eval date_wday=strftime(strptime(now(),"%d/%m/%Y"),"%a") | rex field=maintenance_days "date_wday\=(?P<mday>[^,])"

What I need is: if the above search is run on "Mon", the regex in the search becomes | rex field=maintenance_days "Mon\=(?P<mday>[^,])"; if it runs on Wednesday, it becomes | rex field=maintenance_days "Wed\=(?P<mday>[^,])"; etc. I have tried $date_wday$ instead of date_wday, but it didn't work. I have tried putting | rex field=maintenance_days "date_wday\=(?P<mday>[^,])" inside a macro and passing "date_wday" as an argument, but it again took it as a string instead of the field value associated with it. I did have some success passing the field value via the map command, but I am wondering if there is a nicer way of doing this.
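One token-free sketch that avoids building the regex dynamically: split maintenance_days into a multivalue field, keep only the entry for the current weekday, then strip the Day= prefix. This relies on eval's match() accepting a computed string for its regex argument:

<pre>
| inputlookup maintenance.csv
| eval days=split(maintenance_days, ",")
| eval entry=mvfilter(match(days, strftime(now(), "%a")))
| eval mday=trim(replace(entry, "^\s*\w+=", ""))
</pre>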
I have a lookup table which consists of src_ip. This source IP column has a mix of IPs in this format:

src_ip
163.74.7.212
163.74.13.57
67.75.175.32/27
68.143.151.125/26

I need to match this lookup table against my search, which has the field src_ip in my data. But how do I do that, since it is a mix of CIDR ranges and normal IPs? My actual data for src_ip doesn't contain CIDR IPs. Can someone let me know?
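A common pattern, sketched (the lookup and output field names here are placeholders): define the lookup with CIDR match_type in transforms.conf; CIDR matching also matches plain addresses, which behave like /32 entries.

<pre>
# transforms.conf
[src_ip_ranges]
filename = src_ip_ranges.csv
match_type = CIDR(src_ip)
</pre>

Then in the search, something like:

<pre>
index=your_index
| lookup src_ip_ranges src_ip OUTPUTNEW src_ip AS matched_range
| where isnotnull(matched_range)
</pre>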
Hello, I noticed my indexer was down and I could not sign in. I went to restart Splunk as sudo, then as the root user, and got this error: Splunk is unable to write to the directory /opt/splunk and therefore will not run. Please check for appropriate permissions on this directory and its contents as necessary. I checked what I could, but I cannot come up with a solution, and I did not find much on the Internet. Please help.
Hello, I currently have Splunk Enterprise running version 6.5.0 and want to upgrade. Looking at the upgrade path, I need to jump to 6.6.x before 7.2.x, then finally 8.1. However, I can't find version 6.6.x to download on Splunk's website; the earliest it goes back to is 7.2.0: https://www.splunk.com/en_us/download/previous-releases.html I've sent a message via their contacts link, but no response as yet. Does anyone else have a link to download the 6.6.x version? Thanks, Phil
Hi, I moved the Splunk server (ver. 8.0.2, standalone) to a new instance by copying the entire setup (/opt/splunk). Both are running on CentOS 7, but the issue is that when I started the server on the new instance, all the data (up until yesterday) was gone from the indexes, and the server contains only today's data. What could be wrong? I moved the entire setup. I need your help here, or at least a way to restore from /opt/splunk/var/lib/. Thanks,
Hello, I have two time pickers that I use in a dashboard. One of them sets the time for the whole dashboard; the second one should set the time for one table. Usage should be something like this: On page load, the second time picker initializes with the same time as the first one. On a main time picker change, the second time picker gets the value of the main one. When the second time picker is changed, that one table gets its value; the other panels stay with the main time picker. My code looks like this:

<input type="time" token="time_picker" searchWhenChanged="true">
  <label></label>
  <default>
    <earliest>0</earliest>
    <latest></latest>
  </default>
  <change>
    <set token="time_picker_2.earliest">$time_picker.earliest$</set>
    <set token="time_picker_2.latest">$time_picker.latest$</set>
  </change>
</input>
<input type="time" token="time_picker_2" searchWhenChanged="true">
  <label></label>
  <default>
    <earliest>$time_picker.earliest$</earliest>
    <latest>$time_picker.latest$</latest>
  </default>
</input>

This code somehow works. If I change the main time range, the token for the second time range updates and recreates the table. But the UI time range remains unchanged, so when the main picker is changed from All time to Last 30 days, the table data is from that time range, but the second time picker still shows its "from" as 1/1/1970. Any ideas how to make the time picker UI update? Thank you.
Hi, I have seen this new feature in Splunk 8.1.1: https://docs.splunk.com/Documentation/Splunk/8.1.1/ReleaseNotes/MeetSplunk

Enhanced TSIDX compression
Enhanced TSIDX compression for improved performance and up to 40% reduced storage.

I was just wondering if this feature is related to the tsidxWritingLevel mentioned here: https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/Reducetsidxdiskusage#The_tsidx_writing_level

Splunk Enterprise Version | tsidxWritingLevel supported | Supported tsidx level for searching
7.2.x | 1, 2 | 1, 2
7.3.x | 1, 2, 3 | 1, 2, 3
8.0.x | 1, 2, 3 | 1, 2, 3
8.1.x | 1, 2, 3, 4 | 1, 2, 3, 4

It can be set up in indexes.conf as follows:

tsidxWritingLevel = [1|2|3|4]
* Enables various performance and space-saving improvements for tsidx files.
* For deployments that do not have multi-site index clustering enabled, set this to the highest value possible for all your indexes.
* For deployments that have multi-site index clustering, only set this to the highest level possible AFTER all your indexers in the cluster have been upgraded to the latest code level.
* Do not configure indexers with different values for 'tsidxWritingLevel' as downlevel indexers cannot read tsidx files created from uplevel peers.
* The higher settings take advantage of newer tsidx file formats for metrics and log events that decrease storage cost and increase performance
* Default: 1

Moreover, does anyone have some (good) feedback after enabling it? Thanks a lot, Edoardo
Hi Team, I have one requirement: I need to extract one field from the event. Below are my events.

L=Phoenix, ST=Arizona, C=US>) GET https://lpdosputb50090.phx.vxp.com:9091/api/flow/process-groups/7
L=Phoenix, ST=Arizona, C=US>) GET https://lpdosputb50090.phx.vxp.com:9091/api/flow/process-groups
L=Phoenix, ST=Arizona, C=US>) PUT https://lpdosputb50090.phx.Vxp.com:9091/api/flow/process-groups/7c
L=Phoenix, ST=Arizona, C=US>) POST https://lpdosputb50087.phx.vxp.com:9091/api/flow/process-groups/

I want to extract the word that I have highlighted. Can someone provide a regex for that?
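The highlighting did not survive the paste, so assuming the target is the HTTP method and/or the host after it, a rex sketch (the field names here are mine):

<pre>
| rex "C=US>\)\s+(?<http_method>\w+)\s+https?://(?<dest_host>[^:/]+)"
</pre>

Adjust the capture groups to whichever part of the event you actually need.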
Hi, I have some syslog logs, and a field with these possible values:

myfield=sjhfshgfjwes\
myfield=abah\
myfield=dshaese\

I have a query to show the result in a column chart: index=myindex site=* myfield=* | chart count by site myfield When I click on a bar, I have another dashboard that shows some information for the specific myfield, but I get an "Unbalanced quotes" error, I think because of the final \. How can I remove the \ at the end of the field values? Thank you in advance
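One search-time sketch: strip the trailing backslash with rtrim before charting, so the drilldown token no longer carries the \:

<pre>
index=myindex site=* myfield=*
| eval myfield=rtrim(myfield, "\\")
| chart count by site myfield
</pre>

(An index-time or props.conf SEDCMD could also remove it permanently, if you control the props for that sourcetype.)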
Hi @all, I have the following string, which I want to break into three fields: service_name, host, and port.

metics-ha-5924590-011.aba.corp.tyos.com:2002

service_name = metics
host = ha-5924590-011.aba.corp.tyos.com
port = 2002

Can anyone please help me with how I can do this in a Splunk query? Thanks!!
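A rex sketch, assuming the string is in a field called raw_name (a placeholder) and the service name never contains a hyphen:

<pre>
| rex field=raw_name "^(?<service_name>[^-]+)-(?<host>.+):(?<port>\d+)$"
</pre>

For metics-ha-5924590-011.aba.corp.tyos.com:2002 this splits at the first hyphen and the last colon, giving service_name=metics, host=ha-5924590-011.aba.corp.tyos.com, port=2002.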