All Topics


Hi, I would like to create an environment to practice Splunk Enterprise as a standalone deployment on Windows, and I would also like to know where to run CLI commands, as we do on Linux.
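On Windows the CLI lives under the bin directory of the install and runs from Command Prompt or PowerShell instead of a Linux shell. A minimal sketch, assuming the default install path (the monitored file and index are hypothetical):

    # From an elevated PowerShell prompt
    cd "C:\Program Files\Splunk\bin"
    .\splunk.exe start
    .\splunk.exe status
    .\splunk.exe add monitor "C:\logs\app.log" -index main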
Interesting fields show value counts, but when I click a value it gets automatically added to the search and then shows 0 events. If I use * it works, but if I search for a particular string it shows 0 events.

    index=abc

    Index=abdc cluster_name="abc"   (not working)
    Index=abdc cluster_name="*"     (showing results)
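A quick way to list the raw values of the field, so they can be compared character-for-character with the search string (just a sketch):

    index=abdc
    | stats count by cluster_name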
Hi all, I am having trouble extracting values from a structure. Here is the structure of an event:

    Event{
      ID: user_1
      data: {
        c:[
          {
            Case Name: case_A
            Start Time: 2023.08.10 13:26:37.867787
            Stop Time: 2023.08.10 13:29:42.159543
          }
          {
            Case Name: case_B
            Start Time: 2023.08.10 13:29:42.159543
            Stop Time: 2023.08.10 13:29:48.202143
          }
          {
            Case Name: case_C
            Start Time: 2023.08.10 13:29:48.202143
            Stop Time: 2023.08.10 13:29:51.193276
          }
        ]
      }
    }

I tried to compose a table for a lookup as below:

    ID      case_name   case_start_time              case_stop_time
    user_1  case_A      2023.08.10 13:26:37.867787   2023.08.10 13:29:42.159543
    user_1  case_B      2023.08.10 13:29:42.159543   2023.08.10 13:29:48.202143
    user_1  case_C      2023.08.10 13:29:48.202143   2023.08.10 13:29:51.193276

but I failed to compose it as I expected. I can only compose a single-row table where each cell is multivalued:

    ID      case_name   case_start_time              case_stop_time
    user_1  case_A      2023.08.10 13:26:37.867787   2023.08.10 13:29:42.159543
            case_B      2023.08.10 13:29:42.159543   2023.08.10 13:29:48.202143
            case_C      2023.08.10 13:29:48.202143   2023.08.10 13:29:51.193276

Here is my code:

    index="my_index"
    | rename "data.c{}.Case Name" as case_name, "data.c{}.Start Time" as case_start_time, "data.c{}.Stop Time" as case_stop_time
    | table ID case_name case_start_time case_stop_time

Can anyone help compose the output table I need? I hope to separate each case_name with its own case_start_time and case_stop_time. Thank you so much.
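One common pattern for this (a sketch, assuming the three multivalue fields stay aligned by position and that "|" never appears in the values) is to zip the fields together, expand to one row per case, then split them back out:

    index="my_index"
    | rename "data.c{}.Case Name" as case_name, "data.c{}.Start Time" as case_start_time, "data.c{}.Stop Time" as case_stop_time
    | eval zipped=mvzip(mvzip(case_name, case_start_time, "|"), case_stop_time, "|")
    | mvexpand zipped
    | eval case_name=mvindex(split(zipped, "|"), 0),
           case_start_time=mvindex(split(zipped, "|"), 1),
           case_stop_time=mvindex(split(zipped, "|"), 2)
    | table ID case_name case_start_time case_stop_time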
Hi, I want to run the command "splunk reload deploy-server" on my deployment server, but it fails with the following error:

    [root@server etc]# su splunk
    [splunk@server etc]$ splunk reload deploy-server
    Your session is invalid. Please login.
    ERROR: IP address 127.0.0.1 not in server certificate. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
    Couldn't request server info: Couldn't complete HTTP request: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed

I'm running Splunk Enterprise 9.0.4. The deployment server also acts as a license server and monitoring console. Of course, my certificate does not have the localhost IP in it.

My Splunk has a systemd unit file:

    #This unit file replaces the traditional start-up script for systemd
    #configurations, and is used when enabling boot-start for Splunk on
    #systemd-based Linux distributions.
    [Unit]
    Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=simple
    Restart=always
    ExecStart=/data/splunk/bin/splunk _internal_launch_under_systemd
    KillMode=mixed
    KillSignal=SIGINT
    TimeoutStopSec=360
    LimitNOFILE=65536
    LimitRTPRIO=99
    SuccessExitStatus=51 52
    RestartPreventExitStatus=51
    RestartForceExitStatus=52
    User=splunk
    Group=splunk
    Delegate=true
    CPUShares=1024
    MemoryLimit=24949776384
    PermissionsStartOnly=true
    ExecStartPost=-/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/cpu/system.slice/%n"
    ExecStartPost=-/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/memory/system.slice/%n"

    [Install]
    WantedBy=multi-user.target

The [sslConfig] part of my server.conf:

    [sslConfig]
    useClientSSLCompression = true
    sslVersions = tls1.2
    sslVerifyServerCert = true
    sslVerifyServerName = true
    requireClientCert = false
    serverCert = <Combined PEM Cert>
    sslRootCAPath = <Root CA PEM Cert>
    sslPassword = <Password>
    cliVerifyServerName = true

If you need any more info, let me know.
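Since the CLI here connects to 127.0.0.1 and that IP is not in the certificate, one workaround (a sketch; the hostname is hypothetical and must match a SAN/CN in your certificate) is to point the CLI at a name the certificate does cover:

    splunk reload deploy-server -uri https://deploy.example.com:8089

Alternatively, hostname verification can be relaxed for the CLI only, leaving sslVerifyServerName intact for everything else:

    [sslConfig]
    cliVerifyServerName = false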
Hi everyone, I have a question on the frozenTimePeriodInSecs parameter. Here are my settings inside the indexes.conf file (/opt/splunk/etc/system/local/indexes.conf):

    [_internal]
    frozenTimePeriodInSecs = 864000 # Data retention set to 10 days.
    maxTotalDataSizeMB = 750

    [_audit]
    frozenTimePeriodInSecs = 864000 # Data retention set to 10 days.
    maxTotalDataSizeMB = 750

What I would expect is that buckets in _internal and _audit where all events are older than 10 days get deleted. However, this is not the case. Does anyone know why? On the other hand, maxTotalDataSizeMB does work as expected.

I have checked a couple of places for hints on why frozenTimePeriodInSecs does not work. The results of those checks are shown below as screenshots:
- buckets: whether there are buckets that contain only events older than 10 days.
- btool: whether the settings are actually taken into account.
- monitoring console: whether the settings are actually taken into account.
- _internal logs: whether there are freeze events occurring. They only appear for maxTotalDataSizeMB.

[Screenshots: _audit buckets, _audit btool output, monitoring console 1, monitoring console 2, freeze events]
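For reference, one way to check bucket ages from the search bar (a sketch; in dbinspect output, endEpoch is the timestamp of the newest event in the bucket, and a bucket only rolls to frozen once that newest event is older than frozenTimePeriodInSecs):

    | dbinspect index=_internal
    | eval bucket_age_days=round((now() - endEpoch) / 86400, 1)
    | where bucket_age_days > 10
    | table index, path, state, startEpoch, endEpoch, bucket_age_days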
Hi, I'm trying to use the PREFIX directive in tstats (see: https://docs.splunk.com/Documentation/Splunk/9.1.0/SearchReference/Tstats#Use_PREFIX.28.29_to_aggregate_or_group_by_raw_tokens_in_indexed_data). The docs say it can work with data that does not contain major breakers, such as spaces. My data contains spaces, so I decided to try to change the major breakers this way:

props.conf:

    [test_sourcetype]
    SEGMENTATION = test_segments

segmenters.conf:

    [test_segments]
    MAJOR = \t
    MINOR = / : = @ . - $ # % \\ _ [ ] < > ( ) { } | ! ; , ' " * \n \r \s & ? + %21 %26 %2526 %3B %7C %20 %2B %3D -- %2520 %5D %5B %3A %0A %2C %28 %29

This way, only the tab (\t) is considered a major breaker. I applied this, restarted, and tried to ingest a line of log with the sourcetype "test_sourcetype". Unfortunately, it seems the segmenters.conf does not take effect, because it keeps breaking on a space, for example. I also tried to remove all MINOR breakers and keep only MAJOR, but no luck:

    MAJOR = \t
    MINOR =

Have I made a mistake? Is it possible to do what I want? I think so, because this .conf presentation (https://conf.splunk.com/files/2020/slides/PLA1089C.pdf) mentions it briefly (page 37). Should I also use SEGMENTATION-<segment selection> = <segmenter> in props.conf? The docs say it is for Splunk Web, but I am considering all options... Thanks
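For context, the end goal with PREFIX is a grouping over a raw indexed token along these lines (a sketch; the key= token is hypothetical):

    | tstats count where index=test_index by PREFIX(key=)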
Hello, in the Splunk Cloud monitoring console there is a panel with "Restored searchable storage (DDAS) usage". Is it possible to search for detailed information, such as which index was restored and the size of each index? The console shows only the total size of the restored data. Thanks
L.s., Is it possible for a heavy forwarder to clone data to both a 9997/tcp output (S2S) and an 8088/tcp httpout (HEC), so that both receive the same events? We have a heavy forwarder which has to send the data to two clusters. For one of these clusters we want the data to be received by HEC; the other only has S2S. Thanks in advance. Grts, Jari
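A sketch of what that outputs.conf could look like, assuming Splunk 9.x (which added the [httpout] stanza) and hypothetical hostnames; whether every event is truly cloned down both paths is worth verifying in a test first:

    [tcpout]
    defaultGroup = s2s_cluster

    [tcpout:s2s_cluster]
    server = idx1.example.com:9997, idx2.example.com:9997

    [httpout]
    httpEventCollectorToken = <your HEC token>
    uri = https://hec.example.com:8088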
Hello friends, I'm fairly new to Splunk, so please bear with me here. I have the output of the sar -u command on a Solaris server, in the format:

    Timestamp %usr %sys %wio %idle %cpu

Now, I was able to create a line graph outputting all five values, but as soon as I take away even one of the categories, I only get timestamps and no other values. How can I specifically search to output only the CPU value as an average, in either a bar chart or a filler gauge? Thanks for reading. Best, Denipon
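One way to average a single column (a sketch; the index/sourcetype are hypothetical, and the field is assumed to be extracted as "%cpu"):

    index=os sourcetype=sar
    | rename "%cpu" as cpu
    | stats avg(cpu) as avg_cpu

For a bar chart over time, swap the last line for something like | timechart span=1h avg(cpu) as avg_cpu; a single-value result like the stats version is what a filler gauge visualization expects.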
On my deployment server, when running btool check against inputs.conf and grep'ing for the name of my manually created app (which has nothing but a local directory, an inputs.conf, and an automatically created app.conf file), I get an 'Invalid key in stanza [monitor]...' error which complains about a line where I have:

    index = indexName

and another error about:

    sourcetype = sourcetypeName

I don't understand why Splunk doesn't like these lines. I can't find an appropriate inputs.conf.spec file where the issue could be fixed, but maybe I am not looking in the correct place. When I run a btool check against all of our .conf files, Splunk reports that fields such as index, source, sourcetype, crcSalt, initCrcLength and more are invalid keys. We have hundreds of such 'invalid key' errors. We also have hundreds of "No spec file for:" errors for all .conf files other than inputs.conf; there are no such errors for inputs.conf. Maybe something major (or minor with major implications) went wrong after an upgrade?
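For comparison, a monitor stanza that btool normally accepts carries the monitored path in the stanza header itself (a sketch; the path, index, and sourcetype here are hypothetical):

    [monitor:///var/log/app/app.log]
    index = indexName
    sourcetype = sourcetypeName
    disabled = false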
Hello, I am trying to automate the Splunk Enterprise installation; however, when I create the authentication.conf at deployment time, visiting the URL only gives me an XML error. However, if I manually log on after removing the authentication.conf file and upload the XML LDP file, it works. I originally thought authentication.conf was the output of the XML upload. There are approximately 80-100 extra files in the Splunk directory. Could someone point me in the direction of automating this XML upload part of the process? Thank you
Hi, I am trying to make a search raise an alert only if a username in the lookup table groups.csv matches the username in the search below:

    index=foo sourcetype=WinEventLog
    | stats values(username) as username, values(Target_Domain) as Domain by userid

Thanks
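One common pattern (a sketch; it assumes groups.csv has a username column and that username is a search-time field on the events) is to feed the lookup in as a subsearch, which expands into (username=a OR username=b OR ...):

    index=foo sourcetype=WinEventLog
        [| inputlookup groups.csv | fields username ]
    | stats values(username) as username, values(Target_Domain) as Domain by userid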
We have an index, say 'index1', that has log retention of up to 7 days. As the log volume is huge, we don't want to retain all logs there for more than 7 days. However, there is also a requirement to retain some logs for later use, say some error logs that we want to inspect later. So the solution we thought of is to use 'collect' and put them in a separate index, say 'index2', which has a greater retention, say 6 months. So I planned on using the following:

    index=index1 level=ERROR
    | collect index=index2 output_format=hec

*using output_format=hec because we want to use the exact same source and sourcetype, so the field extractions work exactly like the original index.

However, there are some questions I have with this:

1. Does this method use license? The doc makes the following statements, which are kind of confusing:
   a) Allows the source, sourcetype, and host from the original data to be used directly in the summary index.
   b) No license is counted for the internal stash source type. License is counted when the original source type is used instead of stash in output_mode=hec.
   https://docs.splunk.com/Documentation/Splunk/9.0.2/SearchReference/Collect
2. The document also mentions: "This command is considered risky because, if used incorrectly, it can pose a security risk or potentially lose data when it runs." But it's not clear how it is risky and what to watch for to avoid problems.
3. It looks like the 'collect' command can be used by any user. I tried removing the 'run_collect' capability and it doesn't prevent a role from using collect. How do I allow only certain roles to use 'collect'?
4. The collect command is basically writing to an index. Is there a way to restrict a role from writing data to an index using 'collect' or any other command?
Hi there, Can a frozen bucket be an excess bucket? Additional context: multisite cluster, Splunk Enterprise v8.1.5. Regards, Shashwat
Hi, I'm trying to extract distinct email IDs as a column and prepare some counts. For this, I'm thinking of extracting the email data from the log field. Can someone please provide pointers?

    {
      "log": " \u001b[2m2023-08-09 21:28:28.347\u001b[0;39m \u001b[32mDEBUG\u001b[0;39m \u001b[35m1\u001b[0;39m \u001b[2m---\u001b[0;39m \u001b[2m[nio-8080-exec-7]\u001b[0;39m \u001b[36ms.s.w.c.SecurityContextPersistenceFilter\u001b[0;39m \u001b[2m:\u001b[0;39m Set SecurityContextHolder to SecurityContextImpl [Authentication= SCOPE_profile1]], User Attributes: [{ email=venkatanaresh.mokka@one.verizon.com}], Credentials=[PROTECTED] ]]\n",
      "stream": "stdout",
      "kubernetes": {
        "container_name": "draftx-ui-gateway",
      }
    }
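A regex pull of the address out of the log field is one option (a sketch; it assumes log is already extracted as a field by JSON parsing of the event, and the pattern stops the match before the closing brace):

    ... your base search ...
    | rex field=log "email=(?<email>[^}\s,]+)"
    | stats count by email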
We have many alerts set up in Splunk; how can I get the list of alerts cron-scheduled to run every 10 minutes?
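The REST endpoint for saved searches exposes the cron schedule, so a sketch like this can list them (it assumes every-10-minutes is expressed as */10 * * * *; equivalent cron strings would need their own filter, and this lists all scheduled saved searches, so add a filter on alert actions if reports need excluding):

    | rest /servicesNS/-/-/saved/searches
    | search is_scheduled=1 cron_schedule="*/10 * * * *"
    | table title, eai:acl.app, cron_schedule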
The only downloads I see listed are the unprivileged tgz files, and when I run the install I get the following error:

    Error: Box being upgraded has installation type priv but this installer has type unpriv. Please visit the release page and download the priv installer variant.

Where can the privileged tgz files be downloaded? If the privileged tgz files are unavailable, should I run the unprivileged installer with the --ignore-warnings flag? Thanks in advance for all the help!
Hello Splunk Community, I'm encountering an issue with my search queries in Splunk that I hope someone can help me with. When I run a search, Splunk often indicates that a subset of events has matched (e.g., 2 of 10,000 events matched), but the "Events" panel only shows the count in brackets and does not display the actual results. The main concern here is that these long-running queries frequently fail, and no data is returned at all. This is particularly frustrating when I know that some events have already matched. What I'm looking for is a way to have Splunk return the matched events as they are found, without waiting for the entire search to complete. In other words, if 2 events have matched, I'd like to see those 2 events immediately, even if the search is still ongoing. Is there a configuration or query modification that would allow this behavior? Any guidance or insights would be greatly appreciated. Thank you in advance for your assistance! I have also attached a screenshot for reference.
Can you leverage the total derived using the addcoltotals command to support other calculations? I.e., can you use it to calculate a percentage?

    | addcoltotals count labelfield="total"
    | eval percent=((count/total)*100)
    | table host count percent
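For what it's worth, addcoltotals only appends a summary row at the bottom; it does not create a total field on every row, so the eval above has nothing to divide by. Per-row math like this usually leans on eventstats instead (a sketch, assuming a count-by-host result set):

    | stats count by host
    | eventstats sum(count) as total
    | eval percent=round((count/total)*100, 2)
    | table host count percent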
I am trying to use a tstats command to get the last time a server sent logs. The server list I want in the table is in a lookup CSV. The command I am using is:

    | tstats latest(_time) as lastseen where index=windows by host
    | convert ctime(lastseen)

I would like the "where" clause to be something like "where the server name is in the lookup table". Basically, I am trying to filter the output of the query to just the servers I have in the lookup table.
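One common way (a sketch; the lookup name is hypothetical and its column is assumed to be called host) is to feed the lookup in as a subsearch, which expands into (host=a OR host=b OR ...):

    | tstats latest(_time) as lastseen where index=windows [| inputlookup my_servers.csv | fields host ] by host
    | convert ctime(lastseen)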