All Topics


Hi, can anyone please help with extracting a stats count by two fields? I have the below data; each event has a type and a status:

type    status
A       200
B       400
C       200
B       200
A       200
B       400
A       500
C       300

I need stats in the below format:

type    status    count
A       200       2
A       500       1
B       200       1
B       400       2
C       200       1
C       300       1
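A plain `stats count by` over both fields produces exactly this shape — a sketch, assuming the two fields are already extracted as `type` and `status` (the index and sourcetype names are placeholders):

```
index=my_index sourcetype=my_sourcetype
| stats count by type, status
| sort type, status
```

If the fields are not extracted yet, a `rex` to pull `type` and `status` out of the raw event would need to come first.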
Hi everyone, long story short: I am planning to migrate our Splunk cluster from public cloud to on-prem, with all the old data staying in the cloud but moved from local storage to SmartStore. New data will stream to the on-prem cluster with all configuration (index names, users, apps, reports, alerts, dashboards, etc.) unchanged, and we will keep a minimal in-cloud cluster up and running until the old data ages out; moving that data from local storage to SmartStore is for cost savings.

Now, I have two requirements:

1. Rename the indexes when the data is migrated to SmartStore. This will be used in case we need to hook them up to our new on-prem cluster, so the index names need to differ from their previous names.
2. A few of our indexes were configured with "maxDataSize = auto_high_volume". From the SmartStore documentation, it seems we can only use "maxDataSize = auto", and even if we reconfigure this to "auto", it won't resize the existing buckets from 10 GB to 750 MB. Is there any way for us to just move these buckets into SmartStore? The goal is only to retain this data until it expires; there won't be active searching on it.

Thank you
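For the rename part, there is no in-place index rename: the usual approach is to define a new index and copy the old buckets into its directories. A minimal SmartStore stanza sketch, with every name below hypothetical:

```
# indexes.conf -- all bucket/volume/index names are placeholders
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[renamed_index]
remotePath = volume:remote_store/renamed_index
homePath   = $SPLUNK_DB/renamed_index/db
coldPath   = $SPLUNK_DB/renamed_index/colddb
thawedPath = $SPLUNK_DB/renamed_index/thaweddb
# SmartStore requires auto; existing oversized buckets are uploaded as-is,
# and only buckets created after the change use the new target size
maxDataSize = auto
```

This matches the retain-until-expiry goal: the old 10 GB buckets are not rewritten, they are simply uploaded and age out under the index's retention settings.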
Hi All, we are using the DB Connect app to pull DB logs. When we set the interval to 5 minutes (interval = */5 * * * *), I can see that some logs are missing. When we set the interval to 1 minute, I can see more logs. Why is that?

For example, log count on the 6th of October:
- with a 1-minute interval: 521
- with a 5-minute interval: 119
I have the following address, and I want to extract a substring from it. Address: "121, riverstreet, sydney, Australia". I want to extract "sydney". Help would be highly appreciated.
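If the city is always the third comma-separated value, splitting the field avoids regex entirely — a sketch, assuming the field is named `address`:

```
| eval city=trim(mvindex(split(address, ","), 2))
```

An equivalent `rex`, skipping the first two comma-separated segments: `| rex field=address "^[^,]+,[^,]+,\s*(?<city>[^,]+)"`.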
Hi all, Does the Rubrik app support Token authentication yet? Tks Linh
As the title suggests, I am keen to know when the ES license starts counting: from the date of renewal or the date of data ingestion?
I have a UF on an rsyslog server. The UF is forwarding logs to the indexer successfully, but one of my two input flows is going to the wrong index, and I can't figure out why.

inputs.conf:

[monitor:///path/number/one/*]
index = first_index
sourcetype = first_index
host_segment = 4
disabled = false

[monitor:///path/number/two/*]
index = second_index
sourcetype = second_index
host_segment = 4
disabled = false

Data of sourcetype second_index makes it to the corresponding index, but data of sourcetype first_index ends up in the main index. The only props and transforms I have configured are from the VMware add-on and its accessories, but I've scoured its conf files and have not found anything that would send this non-VMware data to main instead of where it belongs when it's specified in $SPLUNK_HOME/etc/system/local/inputs.conf. Any ideas? Thx!
Hello again Spelunkers! So I have data that looks like this:

assessment=normal [1.0]
assessment=normal [1.1]
assessment=suspect [0.75]
assessment=suspect [0.88]
assessment=bad [0.467]

I want a table column named rating that takes the "normal", "suspect", or "bad" without the [###] after it. So I wrote the below, thinking I could name the capture group rating, capture any alpha characters, and terminate at the whitespace between the word and the [###] value. What would be the correct way of writing this? Thank you in advance!

| rex field=raw_ "assessment=(?<rating>/\w/\s)"
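The forward slashes are not valid here — in `rex` the pattern is just a quoted PCRE string, and `\w` needs a quantifier to match the whole word. A sketch, assuming the field holding the text is `_raw`:

```
| rex field=_raw "assessment=(?<rating>\w+)"
```

`\w+` stops on its own at the space before the bracket, so nothing after the word is captured.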
Greetings All, I am very new to Splunk and am creating a dashboard to show top non-compliances. For the below data, I want to display the top non-compliant controls (example output also given below). Could anyone please let me know how I can write a search query for this? Thanks in advance.

Event_ID: abc1
Compliance_result: Non-Compliant
Eval_results: {
  required_tags: { compliance: Compliant }
  encryption_enabled: { compliance: Non-Compliant }
  public_access: { compliance: Compliant }
  policy_enabled: { compliance: Compliant }
}

Event_ID: abc2
Compliance_result: Non-Compliant
Eval_results: {
  required_tags: { compliance: Compliant }
  encryption_enabled: { compliance: Non-Compliant }
  public_access: { compliance: Non-Compliant }
  policy_enabled: { compliance: Compliant }
}

Generate a table in the below format:

Top Non-Compliant controls:
public_access - 2
encryption_enabled - 1
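One way, assuming the events land as raw text in roughly the shape shown above: pull every control whose block says Non-Compliant with a multi-match `rex`, then count. Index and sourcetype names are placeholders:

```
index=my_index sourcetype=my_sourcetype Compliance_result="Non-Compliant"
| rex max_match=0 "(?<control>\w+):\s*\{\s*compliance:\s*Non-Compliant"
| mvexpand control
| stats count as total by control
| sort - total
```

If the events are valid JSON, `spath` with a `foreach` over `Eval_results.*.compliance` would be more robust than a regex over the raw text.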
Hello, can I please know how to pass a value from the output of the 1st query into the 2nd query? Any help would be appreciated.

1st query:
index=<index_name> sourcetype=<sourcetype_name>
| table k8s_label
| where k8s_label="id=<id_number>"

1st query output:
name=peter project_id=123 user_id=2700835661 zone=us-west-2a

2nd query:
index=<index_name> "server failed" Project_id=<need to get project_id from the result of the 1st query's output>

Thanks
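A subsearch can do this: run the first query inside brackets, extract the id, and `return` it as a field=value pair for the outer search. A sketch with placeholder names (the `rex` assumes project_id appears inside the k8s_label value as shown in the output):

```
index=my_index "server failed"
    [ search index=my_index sourcetype=my_sourcetype k8s_label="id=12345"
      | rex field=k8s_label "project_id=(?<Project_id>\d+)"
      | return Project_id ]
```

The subsearch runs first, and `| return Project_id` expands into a `Project_id=<value>` term in the outer search.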
Events are loaded with different currencies from different countries, and we are trying to build a view converting them into one currency. We have uploaded a CSV with the average exchange rate per month, and we would like to display a table that uses the event date to pick the rate from the CSV. The rate should be the one for the event's month, and the view should keep working as we load each new month's rates.
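A time-based lookup pattern usually fits here: derive the event's month, then `lookup` the rate by month and currency. The lookup name and every field name below are assumptions:

```
index=sales_index
| eval month=strftime(_time, "%Y-%m")
| lookup exchange_rates.csv month currency OUTPUT rate
| eval amount_usd = amount * rate
| table _time currency amount rate amount_usd
```

This assumes the CSV has columns month, currency, rate. Because the lookup is evaluated at search time, newly uploaded months are picked up automatically.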
I'm having a difficult time understanding the difference between the Content Pack for Unix Dashboards and Reporting and the Content Pack for Monitoring Unix and Linux. I see that the Content Pack for Unix Dashboards and Reporting is available for use with IT Essentials Work, while the Content Pack for Monitoring Unix and Linux is only available with ITSI. What are the differences in features between the two?
I have an item-visit log index with the fields category and item; each event is a visit. In addition, I have an index with all items in the system, in the form category, items_count. I want to create a timechart of categories: <category> -> <visited items>/<all items> over time. What I tried:

index="visited"
| eval cat_item = category."/".item
| timechart dc(cat_item) by category
| foreach * [ search index="cat" category="<<FIELD>>" | eval <<FIELD>> = '<<FIELD>>'/items_count ]

But this does not work. timechart here creates a table with categories as columns, and each row contains the count of visited items. The problem is how to get the column name and its value into the subquery; in the examples, <<FIELD>> is used for the column name and the column value alike. Please help.
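`foreach` cannot run a subsearch per column. A common workaround, assuming the all-items data is also available as a lookup (here hypothetically `category_items.csv` with columns category, items_count): flatten the timechart with `untable`, look up the denominator per category, divide, and pivot back with `xyseries`:

```
index="visited"
| timechart dc(item) by category
| untable _time category visited
| lookup category_items.csv category OUTPUT items_count
| eval ratio = visited / items_count
| xyseries _time category ratio
```

The untable/xyseries round trip is the standard way to apply a per-column calculation that `foreach` cannot express.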
Hi Team, I want to extract the AWS region from a host name: host="my-service-name-.ip-101-99-126-252-us-west-2c". I want to extract us-west-2 from the host. How can I achieve this?
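A sketch: anchor on the four IP octets after "ip-" and capture the region while leaving off the trailing availability-zone letter (the pattern assumes regions always look like us-west-2, i.e. word-word-digit):

```
| rex field=host "ip-\d+-\d+-\d+-\d+-(?<aws_region>[a-z]+-[a-z]+-\d)"
```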
I have a dropdown with dynamic data (it changes by the day) that I want filled in for selection and use in the dashboard. I've followed several entries from the community, but the dropdown is blank, only showing the ALL from the 'choice' entry. Here is the XML:

<fieldset submitButton="true">
  <input type="dropdown" token="tok_site" searchWhenChanged="false">
    <label>Site</label>
    <search>
      <query>earliest=-2h index=asset sourcetype=Armis:Asset
| stats count by site.name</query>
    </search>
    <choice value="*">ALL</choice>
    <default>*</default>
    <fieldForLabel>Site</fieldForLabel>
    <fieldForValue>Site</fieldForValue>
  </input>
</fieldset>

I will be adding a couple more dropdowns later, but they are dynamic as well. If I can't get one to work, well.. Any suggestion on where I've made a mistake?
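The likely culprit: the stats output column is named site.name, but fieldForLabel/fieldForValue say Site, a field that doesn't exist in the results, so the dropdown has nothing to populate. A sketch of the corrected input:

```xml
<input type="dropdown" token="tok_site" searchWhenChanged="false">
  <label>Site</label>
  <search>
    <query>index=asset sourcetype=Armis:Asset earliest=-2h
| stats count by site.name</query>
  </search>
  <choice value="*">ALL</choice>
  <default>*</default>
  <fieldForLabel>site.name</fieldForLabel>
  <fieldForValue>site.name</fieldForValue>
</input>
```

Renaming in the search (| rename site.name as Site) and keeping Site in the label/value tags would work equally well.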
Hi Guys, I have a scenario where I need to extract the file name from the event logs. The first line of the event log looks like this:

[INFO] 2021-09-30T00:04:17.052Z 8d5eb00a-d033-49a9-9d0f-c61011e4ae51 {"Records": [{"eventVersion": }]

Now I need to write a rex query to extract the file name "8d5eb00a-d033-49a9-9d0f-c61011e4ae51" from the above event log. This file name changes for every search query, along with the timestamp. Can someone suggest how to resolve this? Thanks.
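The file name shown is a UUID, so matching its shape directly is more robust than counting tokens — a sketch:

```
| rex "(?<file_name>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"
```

If similar UUIDs can also appear later in the event, anchoring on the line prefix with `^\[\w+\]\s+\S+\s+` before the capture group pins the match to the token after the timestamp.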
Hi, I need Splunk to be deployed using a Helm chart. Can anyone help me? Thanks
Hi, I want to use the sendemail SPL command, but it gives me the below error:

command="sendemail", (535, '5.7.3 Authentication unsuccessful') while sending mail to: myemail@mydomain.com

My SPL command:

index=_internal
| head 5
| sendemail to="myemail@mydomain.com" server=mail.server.com subject="Here is an email from Splunk" message="This is an example message" sendresults=true inline=true format=raw sendpdf=true

FYI: the email settings in the web server configuration are already set correctly, and I tested them with an alert, which sent email correctly. But when I use the sendemail SPL command it does not work! I also checked the config file, and it is set correctly: /opt/splunk/etc/system/local/alert_actions.conf

Any ideas? Thanks,
Does anyone know how long a universal forwarder takes to recheck the DNS entries of the servers listed in its outputs.conf file? If the servers are listed by server name rather than IP in the outputs file, then Splunk has to go out and resolve the IPs of those servers. I know it does this at forwarder startup, but does it also recheck periodically? I am looking at a situation where the DNS entries for the backend servers get changed to new IPs (in a disaster recovery scenario), and I want to know how long it would take the forwarders to start picking up the new IPs (without having to cycle all of the forwarders that talk to the indexers whose IPs changed). Thanks.
Hello everyone, I have added an IP to local_intel_ip.csv, and it now appears on the Threat Artifact panel. The correlation search "Threat Activity Detected" is enabled with the Adaptive Response Actions of Notable and Risk Analysis. A notable event was triggered with this IP as the destination IP, but the aforementioned notable (Threat Activity Detected) was never triggered. Any idea what I might have done wrong? Thank you in advance. Chris