All Topics


Hello everyone! I planned to ingest *.csv files with a Universal Forwarder on Windows Server 2019 in batch mode. It sounds trivial, but I ran into a problem. After a new file appears, I see the new events from the search head, and I then expect the file to be deleted by the Splunk UF, but the file remains. The problem seemed to be related to file access, but I can't find any related errors in the logs of this UF instance. So what could be the root cause of this behavior?

inputs.conf

[batch://C:\ProgramData\ScriptLog\spl_export_vmtools_status\vmtools_stats_*.csv]
disabled = false
index = vsi
crcSalt = <SOURCE>
move_policy = sinkhole
sourcetype = vsi_file_vmtools-stats

props.conf

[vsi_file_vmtools-stats]
ANNOTATE_PUNCT = false
BREAK_ONLY_BEFORE_DATE = true
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = Time
Hi all, Is it possible to get information on the cluster manager config bundle through the REST API? I am specifically looking for the active bundle hash / active bundle ID.
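A hedged sketch of where this information is typically exposed: the cluster manager's info endpoint on the management port reports the active bundle checksum (older versions use "master" in place of "manager"; credentials and hostname here are placeholders):

curl -k -u admin:changeme "https://<cluster-manager>:8089/services/cluster/manager/info?output_mode=json"

From a search, something like | rest /services/cluster/manager/info should surface the active_bundle fields, including its checksum, though exact field names may vary by version.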
Hi Team, I am a firewall engineer working on creating some dashboards. I have created a dashboard that shows the firewall failover state as "Active", "Standby", or "Down" in gauge format. However, I would like to set up the dashboard this way: 1) whenever the firewall fails over and is in the "Down" state, the gauge color should be red; 2) whenever the firewall fails over and is in the "Active" state, the gauge color should be green; 3) whenever the firewall fails over and is in the "Standby" state, the gauge color should be amber. Does anyone know how to do this? Some sample examples would help me understand.
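A hedged sketch of one common approach, assuming a field named state holds the failover status: map the state to a number in the search, then color the gauge by range in Simple XML:

... your failover search ...
| eval gauge_value=case(state="Down",0.5, state="Standby",1.5, state="Active",2.5)
| table gauge_value

<option name="charting.chart.rangeValues">[0,1,2,3]</option>
<option name="charting.gaugeColors">["0xdc4e41","0xf8be34","0x53a051"]</option>

With these ranges, 0-1 (Down) renders red, 1-2 (Standby) amber, and 2-3 (Active) green.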
Hi Team, I need help creating a rex field for country from the sample log format below, but the country name's position is not static and changes from log to log within the {}. Can you help me create a regex field for country only?
sample1 Student":{"country":"IND","firstName":"XYZ","state":"MH","rollNum":147,"phoneNum":1478,"lastName":"qwe","phoneNu}
sample2 :Student":{"firstName":"XYZ","state":"MH","rollNum":147,"country":"IND","phoneNum":1478,"lastName":"qwe","phoneNu}
sample3 :Student":{"firstName":"XYZ","state":"MH","rollNum":147,"phoneNum":1478,"lastName":"qwe","phoneNu,"country":"IND"}
So "country":"IND" anywhere inside Student":{} should be caught by the regex.
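A hedged sketch, assuming the country value never contains an escaped quote: because rex scans the entire event, the key/value pair is matched wherever it appears inside the braces:

| rex "\"country\":\"(?<country>[^\"]+)\""

If the events were complete, valid JSON, spath path=Student.country could extract it directly, but with truncated samples like these a targeted rex is the safer option.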
Hi, I have registered for Splunk Cloud and clicked "start free trial", but I still haven't received the email with the Splunk Cloud free trial account details, like the credentials and link.
Dear All, Some dynamic sources in my environment are ingesting more data into Splunk and breaching the license limit. Is there a way to detect such a source as an outlier through MLTK? For example, the Cisco ASA sourcetype has multiple sources (firewalls) that ingest around 10 GB of data daily; suddenly one day license usage reaches 20 GB. How can I identify which source sent more data into Splunk without creating a manual threshold or average of the data?
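A hedged sketch of one MLTK-based approach (the sourcetype filter and model name are illustrative): learn the per-source daily volume distribution from the license usage log, then flag days that fall outside it:

index=_internal source=*license_usage.log type=Usage st="cisco:asa"
| bin _time span=1d
| stats sum(b) as bytes by _time, s
| fit DensityFunction bytes by s into asa_volume_model

A scheduled search can then apply the model and surface only the anomalous sources:

index=_internal source=*license_usage.log type=Usage st="cisco:asa"
| bin _time span=1d
| stats sum(b) as bytes by _time, s
| apply asa_volume_model
| where 'IsOutlier(bytes)'=1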
I'm new here. I want to install and use Splunk on my iPhone and Mac. How do I install it, and where do I start?
Hello, I am getting an "Action forbidden" error when going to "https://<hostname>/en-US/app/search/analytics_workspace" on Splunk Cloud. Please note: I am logged in with the sc_admin role. Thanks
I've set up a dev 9.2 Splunk environment, and I'm trying to use a self-signed cert to secure forwarding. But every time I attempt to connect the UF to the indexing server, it fails. I've tried a lot of permutations of the below, all ultimately ending with the forwarder unable to connect to the indexing server. I've made sure permissions are set to 600 for the cert and key, made sure the forwarder and indexer have separate common names, and created multiple cert types. But I'm at a bit of a loss as to what I need to do to get the forwarder and indexer to connect over a self-signed certificate. Any help is incredibly appreciated. Below is some of what I've attempted (trying not to make this post multiple pages long).

Simple TLS Configuration

Generating indexer certs:

openssl genrsa -out indexer.key 2048
openssl req -new -x509 -key indexer.key -out indexer.pem -days 1095 -sha256
cat indexer.pem indexer.key > indexer_combined.pem

Note: I keep reading that the cert and key need to be one file, but I'm not sure about this.

Generating forwarder certs:

openssl genrsa -out forwarder.key 2048
openssl req -new -x509 -key forwarder.key -out forwarder.pem -days 1095 -sha256
cat forwarder.pem forwarder.key > forwarder_combined.pem

Indexer configuration:

[SSL]
serverCert = /opt/tls/indexer_combined.pem
sslPassword = random_string
requireClientCert = false

[splunktcp-ssl:9997]
compressed = true

Outcome: indexer listens on port 9997 for encrypted communications.

Forwarder configuration:

[tcpout]
defaultGroup = splunkssl

[tcpout:splunkssl]
server = 192.168.110.178:9997
compressed = true

[tcpout-server://192.168.110.178:9997]
sslCertPath = /opt/tls/forwarder_combined.pem
sslPassword = random_string
sslVerifyServerCert = false

Outcome: forwarder fails to communicate with the indexer.

Log from the indexer (TcpInputProc, on the receiving side):

ERROR TcpInputProc [27440 FwdDataReceiverThread] - Error encountered for connection from src=192.168.110.26:33522. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

Testing with openssl s_client:

openssl s_client -connect 192.168.110.178:9997 -cert forwarder_combined.pem -key forwarder.key

Output: unknown CA (I didn't write the exact message in my notes, but it generally says the CA is unknown).

Note: not sure if I need to add sslVersions = tls1.2, but that seems outside the scope of the issue.

Troubleshooting the connection, running openssl s_client raw:

openssl s_client -connect 192.168.110.178:9997

Output received:

CONNECTED(00000003)
Can't use SSL_get_servername

Full s_client message is here: https://pastebin.com/z9gt7bhz

Further troubleshooting: added the indexer's self-signed certificate to the forwarder:

...
sslPassword = random_string
sslVerifyServerCert = true
sslRootCAPath = /opt/tls/indexer_combined.pem

Outcome: same error message.

Testing with s_client:

openssl s_client -connect 192.168.110.178:9997 -CAfile indexer_combined.pem

Connecting to 192.168.110.178
CONNECTED(00000003)
Can't use SSL_get_servername

Full s_client message is here: https://pastebin.com/BcDvJ2Fs
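For comparison, a hedged sketch of the layout the Splunk docs generally show for self-signed forwarding (paths and password are illustrative, and in 9.x outputs.conf clientCert replaces the deprecated sslCertPath):

Indexer inputs.conf:

[splunktcp-ssl:9997]
compressed = true

[SSL]
serverCert = /opt/tls/indexer_combined.pem
sslPassword = random_string
requireClientCert = false

Forwarder outputs.conf:

[tcpout]
defaultGroup = splunkssl

[tcpout:splunkssl]
server = 192.168.110.178:9997
compressed = true
clientCert = /opt/tls/forwarder_combined.pem
sslPassword = random_string
sslVerifyServerCert = false

An "unknown protocol" error on the receiving side usually means the client sent plaintext to an SSL port, so it is worth verifying that the SSL settings are actually being applied to the active tcpout group, e.g. with splunk btool outputs list --debug on the forwarder.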
Hey folks, been a while - I have a question I figured the community would be better able to answer:

We have a multisite cluster using SmartStore, built in AWS. We are not renewing Splunk, but we need to be able to access the data for the next 7 years and thus want to age it out, though it may need to be searched from time to time.

I understand we can convert to a Free license. However, does the architecture impact that (namely, that there is cluster replication)? Or is it possible to have a single standalone instance with Splunk Free to search as needed?
Right now, on a SOAR events/cases/playbooks menu page, a user can select a page size of 5, 10, 15, 25, or 50, which is the number of events, cases, or playbooks displayed per browser page. Is there a way to change that setting? For example, can SOAR display 100 events on one page? Thank you.
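Not a UI change, but if the underlying goal is pulling more records at once, a hedged sketch using the SOAR REST API, which accepts a page_size query parameter (host and credentials are placeholders):

curl -k -u username:password "https://<soar-host>/rest/container?page_size=100&page=0"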
Hi All, I have a bar chart, like this one. In some conditions it may have a lot of values that need to be reported, but, as you can imagine, it is not very readable. Is it possible to specify a minimum size for each bar and enable a scroll bar so that all events can be seen (clearly...)? Thanks.
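Simple XML charts have no built-in scroll bar, but a hedged workaround is to set a larger panel height so each bar keeps a readable size and the page itself scrolls (the height value is illustrative):

<chart>
  <search>...</search>
  <option name="height">900</option>
</chart>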
I currently have an issue where I want to trigger a certain alert, let's call it unusual processes or logins. I've created a search in which I find the specific events that are considered suspicious, and I save it as a scheduled search with an action that writes it into the triggered alerts. The timeframe is -20m@m to -5m@m, and the cron job runs every 5 minutes. Now I see there is an issue in this case: if I cron the job every 5 minutes, given the look-back timeframe, I'm getting at least 3 of the same events triggered as alerts.

My question is: is there an option/way to trigger based on whether or not an event has already occurred? So basically, the search would check - did I trigger this event before already? If yes, don't write it into the triggered alerts; otherwise, write it into the triggered alerts.

Every bit of help is appreciated.
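Beyond the built-in alert throttling ("Suppress results containing field value" in the alert's trigger settings), a hedged sketch of a lookup-based de-duplication; the lookup name and event_id field are illustrative, event_id would need to be a stable identifier present in the events, and the CSV would need to be seeded with an event_id column first:

... suspicious-event search ...
| lookup already_alerted.csv event_id OUTPUT event_id AS seen
| where isnull(seen)
| outputlookup append=true already_alerted.csv

Each scheduled run then only triggers on events whose identifiers were not recorded by a previous run.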
Hi, is it possible to log to Splunk using Laminas\Log\Writer? ...I've tried to do it but with some problems... do you have any example of how to do it?
The universal forwarder 9.0.9 currently included in SOAR 6.2.2 is being flagged for an OpenSSL vulnerability. Does anyone know which UF version is packaged in the SOAR 6.3.1 release?
I'm not able to even open a support ticket, as there's a required field I can't fill in. I tried with both my Gmail and my company account; there's no email/domain filtering in our domain.
Dear Splunkers, I'm running version 9.3.1, and I would like to perform a search to identify the most common hours at which trucks have been visiting my site location. My search query is the following:

| addinfo
| eval _time = strptime(Start_time,"%m/%d/%Y %H:%M")
| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
| search Plate!=0
| search Location="*"
| timechart span=1h count by Plate limit=50

Like this I'm able to see trucks visiting the location over time in a span. How do I continue so as to display the most common hours during which my trucks visit locations? Thank you
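A hedged sketch of one way to get the most common hours, reusing the same time handling as above: extract the hour of day and count visits by it:

| eval _time = strptime(Start_time,"%m/%d/%Y %H:%M")
| search Plate!=0 Location="*"
| eval hour=strftime(_time,"%H")
| stats count AS visits BY hour
| sort - visits

The top rows are then the hours of day with the most visits; adding Plate to the BY clause would break this down per truck.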
Hello,

I have a Heavy Forwarder, and it was configured just to forward, not index:

[indexAndForward]
index = false

I tried to install the DB Connect app on that HF, but we faced the below error:

Any ideas?
Hi Splunk Experts, I've been trying to apply three conditions, but I'm complicating it a bit, so I would like some input. I have a runtime search which produces three fields (Category, Data, Percent), and I join/append some data from a lookup using User. The lookup has multi-value fields, which are prefixed with "Lookup".

User    | Category | Data | Percent | LookupCategory | LookupData     | LookupPercent  | LookupND1      | LookupND2
User094 | 103      | 2064 | 3.44    | 101 102 104    | 7865 4268 1976 | 7.10 3.21 3.56 | 4.90 2.11 3.10 | 2.20 1.10 0.46
User871 | 102      | 5108 | 5.58    | 103            | 3897           | 7.31           | 5.23           | 2.08
User131 | 104      | 664  | 0.71    | 103 104 105    | 2287 1576 438  | 0.22 0.30 0.82 | 0.11 0.08 0.50 | 0.11 0.02 0.32
User755 | 104      | 1241 | 1.23    | 102 104        | 4493 975       | 0.97 1.12      | 0.42 1.01      | 0.55 0.11

My conditions are as follows:

1. Use the precedence Category if it's greater than the current Category. For example, in the row below: the Category is 103, so I have to check which is the max(LookupPercent) among 101 to 103 and use it if the value in (101 or 102) is greater than 103's.

User094 | 103 | 2064 | 3.44 | 101 102 104 | 7865 4268 1976 | 7.10 3.21 3.56 | 4.90 2.11 3.10 | 2.20 1.10 0.46

2. Ignore the row if the LookupCategory has no category value equal to or greater than the current one. In the case below, the Category is 102, but the lookup has only 103 and no data between 101 and 102, so ignore.

User871 | 102 | 5108 | 5.58 | 103 | 3897 | 7.31 | 5.23 | 2.08

3. If the lookup's current-Category Percent is less than the immediately following category's, then find the absolute difference of the current Category's Data against both the matching lookup Category and the immediately following category, and if the immediately following one is nearer, use it. Here LookupCategory 104's Percent (0.30) is less than 105's (0.82), so as a further step compare abs(664 - 1576) and abs(664 - 438); since abs(664 - 438) is less than abs(664 - 1576), 105's row data should be filtered/used.

User131 | 104 | 664 | 0.71 | 103 104 105 | 2287 1576 438 | 0.22 0.30 0.82 | 0.11 0.08 0.50 | 0.11 0.02 0.32

4. Straightforward: if none of the above conditions matches, the same LookupCategory 104's row should be used for Category 104.

User755 | 104 | 1241 | 1.23 | 102 104 | 4493 975 | 0.97 1.12 | 0.42 1.01 | 0.55 0.11
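Not a full solution, but a hedged sketch of the usual first step for rules like these: zip the parallel multi-value lookup fields together and expand them so each lookup category becomes its own row, after which the four conditions can be expressed with ordinary eval/where logic (field names follow the table above):

... base search joined with the lookup ...
| eval zipped=mvzip(mvzip(LookupCategory, LookupData, "|"), LookupPercent, "|")
| mvexpand zipped
| eval LC=tonumber(mvindex(split(zipped,"|"),0)),
       LD=tonumber(mvindex(split(zipped,"|"),1)),
       LP=tonumber(mvindex(split(zipped,"|"),2))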
Hello, my index configuration is provided below, but I have a question regarding frozenTimePeriodInSecs = 7776000. I have configured Splunk to move data to frozen storage after 7,776,000 seconds (3 months). Once data reaches the frozen state, how can I control the frozen storage if the frozen disk becomes full? How does Splunk handle frozen storage in such scenarios?

[custom_index]
repFactor = auto
homePath = volume:hot/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
thawedPath = /opt/thawed/$_index_name/thaweddb
homePath.maxDataSizeMB = 1664000
coldPath.maxDataSizeMB = 1664000
maxWarmDBCount = 200
frozenTimePeriodInSecs = 7776000
maxDataSize = auto_high_volume
coldToFrozenDir = /opt/frozen/custom_index/frozendb
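For context: once a bucket is copied to coldToFrozenDir, Splunk no longer manages it, so there is no setting that caps frozen volume; pruning is an external housekeeping job. A hedged sketch of such cleanup in Python (the path and retention window are illustrative; frozen bucket directories are normally named db_<newest_epoch>_<oldest_epoch>_<id>):

import os, shutil, time

FROZEN_DIR = "/opt/frozen/custom_index/frozendb"  # illustrative path
CUTOFF = time.time() - 7 * 365 * 86400            # e.g. keep roughly 7 years

for name in os.listdir(FROZEN_DIR):
    parts = name.split("_")
    # expect db_<newest_epoch>_<oldest_epoch>_<bucket_id>
    if len(parts) >= 4 and parts[0] in ("db", "rb"):
        try:
            newest_epoch = int(parts[1])
        except ValueError:
            continue  # skip anything that is not a bucket directory
        if newest_epoch < CUTOFF:
            shutil.rmtree(os.path.join(FROZEN_DIR, name))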