Hi, I'm trying to use the PREFIX directive in tstats (here: https://docs.splunk.com/Documentation/Splunk/9.1.0/SearchReference/Tstats#Use_PREFIX.28.29_to_aggregate_or_group_by_raw_tokens_in_indexed_data). The docs say it only works with data that does not contain major breakers such as spaces. My data contains spaces, so I decided to try changing the major breakers like this:

props.conf:
[test_sourcetype]
SEGMENTATION = test_segments

segmenters.conf:
[test_segments]
MAJOR = \t
MINOR = / : = @ . - $ # % \\ _ [ ] < > ( ) { } | ! ; , ' " * \n \r \s & ? + %21 %26 %2526 %3B %7C %20 %2B %3D -- %2520 %5D %5B %3A %0A %2C %28 %29

This way, only the tab (\t) is considered a major breaker. I applied this, restarted, and ingested a line of log with the sourcetype "test_sourcetype". Unfortunately, the segmenters.conf settings do not seem to take effect, because the data still breaks on spaces, for example. I also tried removing all MINOR breakers and keeping only MAJOR, but no luck:

MAJOR = \t
MINOR =

Have I made a mistake? Is it possible to do what I want? I think so, because this .conf presentation (https://conf.splunk.com/files/2020/slides/PLA1089C.pdf) mentions it briefly (page 37). Should I also use SEGMENTATION-<segment selection> = <segmenter> in props.conf? The docs say that setting is for Splunk Web, but I am considering all options... Thanks
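For reference, this is roughly the kind of search I'm trying to get working once the segmentation is sorted out (the index name and the "status=" token are just placeholders, not my real data):

| tstats count where index=test_index BY PREFIX(status=)

The goal is for values containing spaces to survive as single indexed tokens so PREFIX() can group on them.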
Hello, in the Splunk Cloud monitoring console there is a panel called "Restored searchable storage (DDAS) usage". Is it possible to search for more detailed information, such as which index was restored and the size of each restored index? The console only shows the total size of the restored data. Thanks
L.s., is it possible for a heavy forwarder to clone data to both a 9997/tcp output (S2S) and an 8088/tcp httpout (HEC), so that both receive the same events? We have a heavy forwarder which has to send the data to two clusters. For one of these clusters we want the data to be received over HEC; the other only accepts S2S. Thanks in advance. Grts, Jari
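For context, this is roughly what I had in mind in outputs.conf on the heavy forwarder (hostnames, port, and token are placeholders, and I'm not sure whether the httpout stanza actually participates in cloning alongside tcpout):

[tcpout]
defaultGroup = s2s_cluster

[tcpout:s2s_cluster]
server = idx1.example.com:9997, idx2.example.com:9997

[httpout]
uri = https://hec.example.com:8088
httpEventCollectorToken = 00000000-0000-0000-0000-000000000000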
Hello friends, I'm fairly new to Splunk, so please bear with me here. I have the output of the sar -u command on a Solaris server, in the format:

Timestamp %usr %sys %wio %idle %cpu

I was able to create a line graph outputting all five values, but as soon as I take away even one of the categories, I only get timestamps and no other values. How can I specifically search to output only the CPU value as an average, in either a bar chart or a filler gauge? Thanks for reading. Best, Denipon
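To make the question concrete, this is the sort of thing I'm picturing, assuming the %cpu column has been extracted into a field (I'm calling it pct_cpu here; the index and sourcetype names are placeholders too):

index=os sourcetype=sar_cpu | timechart avg(pct_cpu) AS avg_cpu

or, for a single value to drive a filler gauge:

index=os sourcetype=sar_cpu | stats avg(pct_cpu) AS avg_cpu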
On my deployment server, when running btool check against inputs.conf and grepping for the name of my manually created app (which has nothing but a local directory, an inputs.conf, and an automatically created app.conf file), I get an "Invalid key in stanza [monitor]..." error complaining about a line where I have:

index = indexName

and another error about:

sourcetype = sourcetypeName

I don't understand why Splunk doesn't like these lines. I can't find an appropriate inputs.conf.spec file where the issue could be fixed, but maybe I am not looking in the correct place. When I run a btool check against all of our .conf files, Splunk reports that fields such as index, source, sourcetype, crcSalt, initCrcLength, and more are invalid stanzas. We have hundreds of such "invalid key" errors. We also have hundreds of "No spec file for:" errors for all .conf files other than inputs.conf (no such errors for inputs.conf). Maybe something major (or minor with major implications) went wrong after an upgrade?
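For reference, the commands I'm running look roughly like this (the app name is a placeholder for my manually created app):

$SPLUNK_HOME/bin/splunk btool check | grep my_deployment_app
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep my_deployment_app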
Hello, I am trying to automate the Splunk Enterprise installation. However, when I create the authentication.conf at deployment time, visiting the URL only gives me an XML error. If I manually log on after removing the authentication.conf file and upload the XML LDP file, it works. I originally thought the authentication.conf was the output of the XML upload. There are approximately 80-100 extra files in the splunk directory. Could someone point me in the direction of automating this XML upload part of the process? Thank you
Hi, I am trying to make a search that raises an alert only if the username values in the lookup table groups.csv match the username in the search below:

index=foo sourcetype=WinEventLog | stats values(username) as username, values(Target_Domain) as Domain by userid

Thanks
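To illustrate the intent, this is roughly the shape I'm aiming for, assuming groups.csv contains a username column whose values line up with the event field of the same name (just a sketch, not tested):

index=foo sourcetype=WinEventLog [ | inputlookup groups.csv | fields username ]
| stats values(username) as username, values(Target_Domain) as Domain by userid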
We have an index, say 'index1', that has a log retention of up to 7 days. As the log volume is huge, we don't want to retain all logs there for more than 7 days. However, there is also a requirement to retain some logs for later use, say some error logs that we want to inspect later. So the solution we thought of is to use 'collect' and write those events to a separate index, say 'index2', which has a greater retention, say 6 months. I planned on using the following:

index=index1 level=ERROR | collect index=index2 output_format=hec

*using output_format=hec because we want to keep the exact same source and sourcetype so the field extractions work exactly like the original index.

However, I have some questions about this.

1. Does this method use license? The doc makes the following statements, which is kind of confusing:
a) Allows the source, sourcetype, and host from the original data to be used directly in the summary index.
b) No license is counted for the internal stash source type. License is counted when the original source type is used instead of stash in output_mode=hec
https://docs.splunk.com/Documentation/Splunk/9.0.2/SearchReference/Collect

2. The document also mentions "This command is considered risky because, if used incorrectly, it can pose a security risk or potentially lose data when it runs". But it's not clear how it is risky and what to watch out for to avoid problems.

3. It looks like the 'collect' command can be used by any user. I tried removing the 'run_collect' capability and it doesn't prevent a role from using collect. How can I only allow certain roles to use the 'collect' command?

4. The collect command is basically writing to an index. Is there a way to restrict a role from writing data to an index using 'collect' or any other command?
Hi there, can a frozen bucket be an excess bucket? Additional context: multisite cluster, Splunk Enterprise v8.1.5. Regards, Shashwat
Hi, I'm trying to extract distinct email IDs as a column and prepare some counts. For this I'm thinking of extracting the email data from the log field. Can someone please provide pointers?

{ "log": " \u001b[2m2023-08-09 21:28:28.347\u001b[0;39m \u001b[32mDEBUG\u001b[0;39m \u001b[35m1\u001b[0;39m \u001b[2m---\u001b[0;39m \u001b[2m[nio-8080-exec-7]\u001b[0;39m \u001b[36ms.s.w.c.SecurityContextPersistenceFilter\u001b[0;39m \u001b[2m:\u001b[0;39m Set SecurityContextHolder to SecurityContextImpl [Authentication= SCOPE_profile1]], User Attributes: [{ email=venkatanaresh.mokka@one.verizon.com}], Credentials=[PROTECTED] ]]\n", "stream": "stdout", "kubernetes": { "container_name": "draftx-ui-gateway", } }
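In case it helps clarify what I'm after, something along these lines is what I had in mind (regex untested; it assumes the address always appears as "email=..." inside the log field and stops at the closing brace):

| rex field=log "email=(?<email>[^\s,}\]]+)"
| stats count by email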
We have many alerts set up in Splunk, so how can I get the list of alerts cron-scheduled to run every 10 minutes?
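A sketch of the kind of search I'm hoping exists, assuming a 10-minute schedule is expressed as "*/10 * * * *" (other cron expressions can also mean every 10 minutes, so this may need widening):

| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1 cron_schedule="*/10 * * * *"
| table title cron_schedule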
The only downloads I see listed are the unprivileged tgz files, and when I run the install I get the following error:

Error: Box being upgraded has installation type priv but this installer has type unpriv. Please visit the release page and download the priv installer variant.

Where can the privileged tgz files be downloaded? If the privileged tgz files are unavailable, should I run the unprivileged installer with the --ignore-warnings flag? Thanks in advance for all the help!
Hello Splunk Community, I'm encountering an issue with my search queries in Splunk that I hope someone can help me with. When I run a search, Splunk often indicates that a subset of events has matched (e.g., 2 of 10,000 events matched), but the "Events" panel only shows the count in brackets and does not display the actual results. The main concern here is that these long-running queries frequently fail, and no data is returned at all. This is particularly frustrating when I know that some events have already matched. What I'm looking for is a way to have Splunk return the matched events as they are found, without waiting for the entire search to be completed. In other words, if 2 events have matched, I'd like to see those 2 events immediately, even if the search is still ongoing. Is there a configuration or query modification that would allow this behavior? Any guidance or insights would be greatly appreciated. Thank you in advance for your assistance! I have also attached a screenshot for reference.
Can you leverage the total derived using the addcoltotals command to support other calculations? I.e., can you use it to calculate a percentage?

| addcoltotals count labelfield="total"
| eval percent=((count/total)*100)
| table host count percent
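For comparison, the other approach I'm weighing uses eventstats to put the total on every row, which I believe makes a per-row percentage possible (just a sketch, not tested):

| eventstats sum(count) as total
| eval percent=round((count/total)*100, 2)
| table host count percent total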
I am trying to use a tstats command to get the last time a server sent logs. The server list I want in the table is in a lookup CSV. The command I am using is:

| tstats latest(_time) as lastseen where (index=windows) by host
| convert ctime(lastseen)

I would like the "where" clause to be something like "where the server name is in the lookup table". Basically, I'm trying to filter the output of the query to just the servers I have in the lookup table.
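Something like this is what I'm imagining, assuming the lookup is called server_list.csv and has a column named host that matches the indexed host field (both names are placeholders for my actual lookup):

| tstats latest(_time) as lastseen where (index=windows) [ | inputlookup server_list.csv | fields host ] by host
| convert ctime(lastseen)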
Hi all, I'm kind of new to Splunk. I have data by day: the response time for each API call, per day. I want to run that search automatically every day, collecting the results into a summary index (I cannot run this by month since it is too much data). Then, every month, I want to use the summary index to calculate the 95th percentile, average, and standard deviation of all the response times for each API call. The summary index should allow me to do that faster, although I am not sure of the mechanics of how to use it. For instance, do I need to re-add my filters for the monthly pull? Does the below look correct so far for pulling in all the information (events)? I want to understand if I am doing this correctly. I have the below SPL by day:

index=virt [other search parameters]
| rename msg.sessionId as sessionId
| rename msg.apiName as apiName
| rename msg.processingTime as processingTime
| rename msg.responseCode as responseCode
| eval session_id= coalesce(a_session_id, sessionId)
| fields …
| stats values(a_api_responsetime) as responsetime, values(processingTime) as BackRT by session_id
| eval PlatformProcessingTime = (responsetime - BackRT)
| where PlatformProcessingTime>0
| collect index=virt_summary

Then I have the below SPL by month:

index=virt_summary
| bucket _time span=1mon
| stats count as Events, avg(PlatformProcessingTime), stdev(PlatformProcessingTime), perc95(PlatformProcessingTime) by _time

Any assistance is much appreciated! Let me know if you need more clarification. The results are what I have attached, so it looks like it is not working properly. I tested the results by day.
How do I get the total volume size utilization of each indexer in an indexer cluster of 10? I have the cluster manager with 10 indexers and would like to know whether there is a way to query, from the CM or a dashboard, the volume utilization of each indexer. We don't have the distributed Monitoring Console set up yet. We have 1 SH cluster with 5 SHs, 1 CM, 10 indexers, and 1 deployer to manage the SH cluster.
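As an illustration of what I'm hoping for, something like this run from a search head that has all 10 indexers as search peers might be close (just a sketch; it only sums bucket sizes per indexer, not total disk usage):

| dbinspect index=*
| stats sum(sizeOnDiskMB) as total_bucket_MB by splunk_server
| sort - total_bucket_MB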
Hi Folks,  Is there any way to bulk edit many alerts/reports or dashboards in Splunk Cloud? For example, I'm planning to edit the index of many alerts at the same time but I cannot find an option to do this in bulk. Any insight will be appreciated. Thanks in advance!
Register here. This thread is for the Community Office Hours session on Splunk Enterprise Security (ES) on Wed, October 25, 2023 at 1pm PT / 4pm ET.

This is your opportunity to ask questions related to your specific Enterprise Security (ES) challenge or use case, including:

- What's new in Enterprise Security 7.2
- Enterprise Security Content Update (ESCU) app and the latest security content
- Implementing use cases like RBA, incident management, threat hunting, etc.
- Implementing threat detections (including 6 new ML-powered detections)
- Enhancing notable events (e.g., using threat intelligence feeds)
- Adding adaptive response actions
- Recommended Splunkbase apps and add-ons for ES use cases
- Anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will go in order of the questions posted below, then will open the floor up to live Q&A with meeting participants. If there's a quick answer available, we'll post as a direct reply.

Look forward to connecting!
Register here. This thread is for the Community Office Hours session on Splunk Enterprise Security: RBA on Wed, November 8, 2023 at 1pm PT / 4pm ET.

This is your opportunity to ask questions related to your specific challenge or use case using Splunk Enterprise Security Risk-Based Alerting, including:

- Implementing RBA in Splunk Enterprise Security
- Best practices for proper creation of risk rules, modifiers, etc.
- Troubleshooting and optimizing your environment for successful implementation
- Anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will go in order of the questions posted below, then will open the floor up to live Q&A with meeting participants. If there's a quick answer available, we'll post as a direct reply.

Look forward to connecting!