All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm receiving an error whenever I try to view any csv lookup tables I have uploaded into my search head cluster (v8.1.6). Uploading the same csv files onto my local sandbox works without issue.

With the query

| inputlookup <filename>.csv

I receive the error

The lookup table '<filename>.csv' requires a .csv or KV store lookup definition.

The .csv files appear on the local file system and propagate across the cluster properly. The splunkd.log also doesn't give any information beyond what the UI already outputs.
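One thing worth checking is whether a lookup definition exists on the search head cluster for that file. A minimal sketch of the usual transforms.conf stanza, pushed from the SHC deployer (the stanza name below is a placeholder, not taken from the post):

[my_lookup]
filename = <filename>.csv

With a definition in place, | inputlookup my_lookup should resolve; if the error persists, permissions and sharing on the lookup file and definition are the next things to compare against the sandbox.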
Hello, I have CSV source files with epoch time and header info (a file with a few sample events is given below). I wrote a props configuration file (see below), tested it with a few events, and it works as expected. Do you have any recommendations on this props configuration file, or am I good to go with this props.conf? Also, is there any way I can change the field names (i.e., id as ID, created as TIMESTAMP, and so on)? Your feedback and help will be highly appreciated. Thank you so much.

Sample csv with epoch time:

props.conf that I wrote:

[csv]
SHOULD_LINEMERGE=false
CHARSET=UTF-8
INDEXED_EXTRACTIONS=csv
category=Structured
HEADER_FIELD_LINE_NUMBER=1
TIMESTAMP_FIELDS=created
TIME_FORMAT=%s%9N
MAX_TIMESTAMP_LOOKAHEAD=14
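For the renaming part, a minimal sketch of a search-time alias in props.conf, assuming the same sourcetype stanza as above (FIELDALIAS keeps the original field name and adds the new one rather than replacing it):

[csv]
FIELDALIAS-rename_id = id AS ID
FIELDALIAS-rename_created = created AS TIMESTAMP

These alias settings take effect at search time on the search head(s), separate from the index-time settings shown above.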
I'm not sure if I'm missing something simple or not, but I've got event logs from my Salesforce instance fed in, as well as the User object, and for some reason I can aggregate on some fields of User but not others, even though the fields exist in Splunk.

index=sfdc sourcetype=LightningPageViewCSV
| join USER_ID [ search sourcetype=sfdc:user | eval USER_ID=substr(Id,1,len(Id)-3) ]
| stats avg(EFFECTIVE_PAGE_TIME) by Name

// This works to aggregate by the user's name. Not really useful, but it was a test to make sure something came through. The substring is because one object uses the 18-char Salesforce Id and the other uses the shortened 15-char Id.

index=sfdc sourcetype=LightningPageViewCSV
| join USER_ID [ search sourcetype=sfdc:user | eval USER_ID=substr(Id,1,len(Id)-3) ]
| stats avg(EFFECTIVE_PAGE_TIME) by State__c,Loc__c

// No results from this for some reason. State__c and Loc__c are custom fields on User.

index=sfdc sourcetype=sfdc:user
index=sfdc sourcetype=sfdc:user Name="[one of the names from the first query]"

// I run these just to see what I've got in my user object, and I can see several people with non-null State__c and Loc__c.

This is a new dev org I just spun up, so I'm not sure if I missed a step in adding these sources or not. The LightningPageViewCSV is an imported static CSV file of the EventLogFile for testing. The sfdc:user was a one-time read of the User object. Both of these are tied to the sfdc index.
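One hedged thing to try: make sure the subsearch actually returns State__c and Loc__c to the join, since join only brings back the fields the subsearch outputs. A sketch using the field names from the post:

index=sfdc sourcetype=LightningPageViewCSV
| join USER_ID [ search index=sfdc sourcetype=sfdc:user | eval USER_ID=substr(Id,1,len(Id)-3) | fields USER_ID Name State__c Loc__c ]
| stats avg(EFFECTIVE_PAGE_TIME) by State__c, Loc__c

If State__c and Loc__c are still empty after the join, the subsearch result limits or field extraction on sfdc:user would be the next suspects.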
Hello, thank you for taking the time to consider my question/situation. I am working on removing static deploymentclient.conf configurations (located on endpoints under $SPLUNK_HOME/etc/system/local) in my organization in favor of app-based configurations for this, which are sent from the existing deployment server.

Initially I had no issues removing the existing deploymentclienttest.conf file within /etc/system/local on the deployment client using a Windows batch file (.bat) stored under /etc/deployment-apps/<appName>/bin/<nameOfRemovalscript>.bat. The contents of the bat file are shown below:

del "C:\Program Files\SplunkUniversalForwarder\etc\system\local\deploymentclienttest.conf"

The inputs.conf stored in the same custom app under the local/ directory is shown below:

[script://C:\Program Files\SplunkUniversalForwarder\etc\apps\<nameofApp>\bin\<replaceDeploymentClient>.bat]
interval = -1
source = replaceDeploymentClient
sourcetype = scriptedInput
index = _internal
disabled = 0

However, since I did this, my workstation no longer actually runs any scripts (I've tested .bat and .cmd scripts, no Python or .ps1). I've tried referring to the script using both absolute (shown above) and relative file paths, as well as storing the .bat file within <appname>/bin/scripts/ in case that was something that was needed, but it wasn't configured that way when I got it to work the first time.

My question is essentially this: what would cause a UF to just not be able to run scripts deployed by the DS anymore? If I go into the app and manually run the script, it removes the files and does whatever other commands I entered just fine, so what gives? I'm beginning to think this is a bug, but I still have hope that this is just the result of a bad config in one place or another.

Please advise on any further troubleshooting I can do. I should note that within splunkd.log on the UF it says that the script has been scheduled to run whenever I deploy it with "restart splunkd" enabled for the app, but even that doesn't seem to do the trick.

Any help is appreciated, and thanks in advance!
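For comparison, a minimal sketch of the inputs.conf form Splunk documents for app-relative scripts on Windows (interval = -1 runs the script only once per splunkd start, so the forwarder has to restart for each run; the script name is a placeholder):

[script://.\bin\replaceDeploymentClient.bat]
disabled = 0
interval = -1
sourcetype = scriptedInput
index = _internal

Checking splunkd.log on the UF for ExecProcessor messages about this script (rather than just the "scheduled" line logged at deployment time) may show why it never actually executes.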
Hello, I hope to get some guidance on configuring the Splunk web interface to be public facing while keeping the management side on a private interface. Some of the information I have read from our esteemed experts is a bit confusing to me. I understand that I am able to make changes to the web.conf file to alter the default IP/interface, but there is a caution that I should also change the management side along with that.

For security reasons and separation of duties, I am hoping to set it up so that only people who have physical access to the private network can make managerial changes, while allowing analysts outside the immediate area to access the web interface to use the SIEM. Is this even possible, or am I seeking to set something up that is largely moot?

v/r
Matt
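A hedged sketch of the web.conf side of this, with placeholder IPs (server.socket_host binds Splunk Web to one interface, and mgmtHostPort tells Splunk Web where to reach splunkd); the splunkd/management binding itself is usually controlled separately, e.g. via SPLUNK_BINDIP in splunk-launch.conf, so this is worth verifying against the docs for your version:

# web.conf
[settings]
# public-facing interface for Splunk Web (placeholder address)
server.socket_host = 203.0.113.10
# private interface and port where splunkd is listening (placeholder address)
mgmtHostPort = 10.0.0.5:8089

Firewalling TCP 8089 so it is reachable only from the private network is the other half of keeping the management side non-public.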
In Splunk Cloud, when I go to change the time picker it brings up relative options. It used to bring up presets. How do I get presets to be the default? P.S.: nothing I do in Settings > Server Settings > Search Preferences seems to take. It's set to 7 days, but Search and Reporting still shows "Last 30 minutes". So I have to change the time picker, then go to presets, then select whatever preset I want.
Hi Splunkers, I have a requirement to show a single value panel with the total number of connections to a server and change the panel color to RED when the connection is down (which is not being shown on the panel). I've tried using classField and range, but it seems those are deprecated. I tried searching this forum but couldn't find any relevant options. Is there any other alternative to get this done? Please help.

Data:
session - name of the session (can be many)
server - server name, can be many (used trellis for this purpose)
STATUS - status of the connection, can be either UP or DOWN

I've used rangeValues in the simple XML below, which isn't working as expected.

<form>
  <label>Color My Text</label>
  <fieldset submitButton="false">
    <input type="time" token="time">
      <label>time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <single>
        <search>
          <query>index=* | stats count by session,server,STATUS | foreach server [eval range=if('STATUS'="DOWN","severe", "low")] | chart count by server</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="classField">range</option>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="field">count</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051","0xdc4e41"]</option>
        <option name="rangeValues">[1]</option>
        <option name="refresh.display">progressbar</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">1</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">small</option>
        <option name="trellis.splitBy">server</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitserver">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">0</option>
      </single>
    </panel>
  </row>
</form>
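Since classField is not honored by the newer single value visualization, one hedged alternative is to have the search return a number that drops to 0 when the connection is down and let rangeValues/rangeColors color by value. A sketch using the fields from the post (0 maps to red, anything above it to green):

index=* | stats count(eval(STATUS="UP")) as up_sessions by server

<option name="colorBy">value</option>
<option name="rangeValues">[0]</option>
<option name="rangeColors">["0xdc4e41","0x53a051"]</option>

The trellis split by server from the original XML can stay as-is; each tile then shows its own up_sessions count.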
Hi there, I have two application log messages that I receive in Splunk:
1. Service stopped
2. Service started

I need to create an alert if the "Service started" log message does not show up within 10 minutes of the "Service stopped" log message. So the alert needs to trigger an email only if it has been more than 10 minutes since the service stopped and a new log message stating "Service started" has not shown up in the logs.

I am finding some solutions here, but need one that will compare the log messages. I am new to Splunk, so please do share the syntax, as I would not know how to work it out without it.

index=* | search app=xxx log="xxx" message="*service stopped/started*"
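A hedged sketch of an alert search for this (scheduled, say, every 5 minutes and set to trigger when the number of results is greater than 0); the index, app, and message wording are placeholders taken from the post and would need adjusting to the real events:

index=* app=xxx (message="*Service stopped*" OR message="*Service started*")
| eval state=if(match(message, "(?i)started"), "started", "stopped")
| stats latest(_time) as last_event_time, latest(state) as last_state
| where last_state="stopped" AND last_event_time < relative_time(now(), "-10m")

If multiple services or apps are involved, adding "by app" (or a service identifier) to the stats keeps them from masking each other.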
I am trying to separate multi-value rows into their own rows. I have been trying to separate them by adding a comma after the end of each row and then splitting on the comma, but I am only able to split the first repetition of the pattern. Can anyone help?

Example: I have rows like this:

Domain Name   Instance name   Last Phone home   Search execution time
Domain1.com   instance1.com   2022-02-28        2022-03-3
              instance2.com
              instance3.com
              instance4.com

And I would like to transform them into this:

Domain Name   Instance name   Last Phone home   Search execution time
Domain1.com   instance1.com   2022-02-28        2022-03-02
Domain1.com   instance2.com   2022-02-28        2022-03-02
Domain1.com   instance3.com   2022-02-28        2022-03-02
Domain1.com   instance4.com   2022-02-28        2022-03-02
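If "Instance name" is already a multivalue field, a hedged sketch with mvexpand (the field is renamed only to avoid the space in its name):

| rename "Instance name" as instance_name
| mvexpand instance_name

If the instances are really one string with separators in it, converting it first with makemv (or eval split()) and then expanding should work, e.g. | makemv delim=" " instance_name | mvexpand instance_name, with the delimiter adjusted to whatever actually separates the values.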
Is it possible to apply the color based on the first value of a multi-value field? Below is the sample data. If the first value of server_status is Online, then the field color should turn green; otherwise the color will be red.

hostname    server_status
server101   Online
            Offline (31 days ago)

hostname    server_status
server101   Offline (31 days ago)
            Online
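A hedged sketch of deriving the first value so it can drive the color (mvindex(x, 0) returns the first entry of a multivalue field; the range names just mirror Splunk's usual severity palette):

| eval first_status=mvindex(server_status, 0)
| eval range=if(first_status="Online", "low", "severe")

The resulting range (or first_status) field can then feed whatever coloring mechanism the table or dashboard offers, for example a color format based on that hidden column, while server_status itself stays multivalue.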
Hi Splunkers, I need help on how to sort these multi-value fields based on the latest timestamp and status. Here's my dummy query for this:

| makeresults
| eval hostname = "server101"
| eval id = "123|124"
| eval database_timestamp = "Mar 03, 2022 12:59:46 PM|Feb 23, 2022 1:19:24 PM"
| eval database_status = "Online|Offline (30 days ago)"
| eval server_timestamp = "Feb 22, 2022 1:19:24 PM|Mar 01, 2022 12:59:46 PM"
| eval server_status = "Offline (31 days ago)|Online"
| fields hostname id database_timestamp database_status server_timestamp server_status
| makemv delim="|" database_timestamp
| makemv delim="|" database_status
| makemv delim="|" server_timestamp
| makemv delim="|" server_status
| makemv delim="|" id

Below are the sample output and expected output.

Current Output:

hostname    database_timestamp          database_status          server_timestamp            server_status
server101   Mar 03, 2022 12:59:46 PM    Online                   Feb 22, 2022 1:19:24 PM     Offline (31 days ago)
            Feb 23, 2022 1:19:24 PM     Offline (30 days ago)    Mar 01, 2022 12:59:46 PM    Online

Expected Output:

hostname    database_timestamp          database_status          server_timestamp            server_status
server101   Mar 03, 2022 12:59:46 PM    Online                   Mar 01, 2022 12:59:46 PM    Online
            Feb 23, 2022 1:19:24 PM     Offline (30 days ago)    Feb 22, 2022 1:19:24 PM     Offline (31 days ago)
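A hedged sketch of one way to do the sorting: pair each timestamp with its status using mvzip, expand, sort on the parsed time, then rebuild the multivalue fields with stats list(). Only the server_* pair is shown; the database_* pair would get the same treatment (the "###" delimiter is just an arbitrary separator assumed not to appear in the data):

| eval pair=mvzip(server_timestamp, server_status, "###")
| mvexpand pair
| eval server_timestamp=mvindex(split(pair, "###"), 0), server_status=mvindex(split(pair, "###"), 1)
| eval sort_key=strptime(server_timestamp, "%b %d, %Y %I:%M:%S %p")
| sort 0 hostname, -sort_key
| stats list(server_timestamp) as server_timestamp, list(server_status) as server_status by hostname

stats list() preserves the order of the incoming rows, which is what makes the sort stick.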
Hello, we have been using the Okta Identity Add-on ( https://splunkbase.splunk.com/app/3682/#/details ) for about 5+ months now. We discovered that the user import has not been able to fetch all the user accounts. As per the product documentation, the users input job imports all the user accounts in its first run, and thereafter, in subsequent runs, it only brings in the users who have been modified or changed. But in our case, we are seeing that even the first run did not bring in everything. My question is: is there a way to manually run the user import to fetch everything from scratch?

These are our settings
Hello, I have a situation where I am trying to pull, from within a field, values with the nomenclature ABC-1234-56-7890, but I want to pull only the first three letters and the last four numbers into one field. I have the following query so far but have not figured out how to do as described above:

| rex field=comment "(?<ABC>ABC\-\d+\-\d+\-\d+)"

I want the return of "ABC-7890". What am I missing so that I can successfully pull both the beginning and the end of the above described string? Thanks!
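A hedged sketch of one way: capture the leading letters and the trailing digits as separate groups and stitch them together with eval (the pattern assumes the format is always three letters followed by three dash-separated number groups):

| rex field=comment "(?<prefix>[A-Z]{3})-\d+-\d+-(?<suffix>\d{4})"
| eval ABC=prefix . "-" . suffix

For ABC-1234-56-7890 this yields ABC-7890.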
While preparing for an upgrade of an indexer cluster with RF=1, I'm wondering what the effective behaviour of a cluster in maintenance mode is with this RF. If an indexer goes down because of the upgrade activity and restart, there is no data to replicate to other nodes anyway, so no fixups should occur. So maintenance mode does not really do much in this case, am I right?
Hi there, I have a line of log like this:

http://some.url/path/?param=x,y,z

I want to extract a field "extractedParam" with the value "x,y,z". Then I want to extract the three values into a multivalue field "mvExtractedParam". Within Splunk Cloud I would use a field extraction with the following regex, wrapped up by a field transformation (where I can check "Create multivalued fields"). So I am trying to do everything within one regex, but here I am struggling.

\?param=(?<extractedParam>.*)

This extracts "x,y,z". Right now I don't know how to chain the next step...

All the best,
Marco
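If doing it in the search (rather than purely in the extraction/transformation chain) is acceptable, a hedged sketch with split() after the rex:

| rex field=_raw "\?param=(?<extractedParam>[^\s&]+)"
| eval mvExtractedParam=split(extractedParam, ",")

split() turns the comma-separated string into a multivalue field in one step, which avoids having to express the "explode into multiple values" part inside a single regex.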
Hello, I have installed Splunk Enterprise on Ubuntu 20.04 two times now, but I get warnings from licensing when adding sources. I installed a 5 GB/day license and added a syslog input on udp/1514 and a new index. After this, Splunk starts complaining about:

This deployment is subject to license enforcement. Search is disabled after 45 warnings over a 60-day window. Licensing alerts notify you of excessive indexing warnings and licensing misconfigurations.

1 cle_pool_over_quota message reported by 1 indexer. Correct by midnight to avoid warning.

Can anyone help me in the right direction? The total amount of data = 0 MB, so this is clearly not correct.

Regards, Jon
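A hedged sketch of a search against the license usage log to see what the license manager actually metered per pool and day (run it over the last few days; this is a common sanity check, not something specific to this error message):

index=_internal source=*license_usage.log* type=Usage
| eval MB=round(b/1024/1024, 2)
| timechart span=1d sum(MB) as daily_MB by pool

If this really shows ~0 MB against a 5 GB/day pool, the warning is more likely a pool/stack misconfiguration than actual over-quota usage.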
Hi everyone, I have an issue upgrading the Splunk Universal Forwarder from 7.3.3 to 8.1.3 (Windows platform). During our investigation, we found that the problem only occurs on machines that previously ran UF 6.5.2. We tried a few tricks with MSI package recache, repair, or uninstall, but can't find a way to install version 8.1.3. There is no problem going back to version 7.3.3: we do the standard install and everything works fine. No matter what we do, in the 8.1.3 installation log we still find that the MSI installer is detecting a previous version of the product, 6.5.2 (while the workstation is on 7.3.3). Do you have an idea what we can try?
Hello, I would like to improve the Escalation Policy in our organization. Currently everyone has different settings, but we want to introduce one standard for each user - is that possible? If yes, could you give me some tips or a template for how we can do this with your support?
Prior to upgrading from 8.1 to 8.2, I'm reading https://docs.splunk.com/Documentation/Splunk/8.2.0/Indexer/Reducetsidxdiskusage#The_tsidx_writing_level and one thing is not entirely clear to me.

A change to the tsidxWritingLevel is applied to new index bucket tsidx files. There is no change to the existing tsidx files.
A change to the tsidxWritingLevel is applied to newly accelerated data models, or after a rebuild of the existing data models is initiated. All existing data model accelerations will not be affected.

The first statement is pretty straightforward - if I raise the tsidxWritingLevel, only newly created buckets will be indexed with the new level. That's pretty obvious. But I'm not entirely sure what the description of accelerated data models means. If it worked the same way, I'd expect already created summaries to be left as-is at their own level, while newly created summary "buckets" (are they still called that in the case of data model acceleration summaries?) would be created with the new level. Is that so? Or does it apply to the whole acceleration summary only after a complete rebuild? That would be rather unfortunate, especially since I have some huge accelerated data models.
Hello, what could explain a Correlation Search that is enabled and scheduled to run, yet in the Next Scheduled Time column in /app/SplunkEnterpriseSecuritySuite/ess_content_management the Next Scheduled Time appears to be in the past (today is the 3rd of March)? It is also not triggering any events in the Incident Review tab in the Enterprise Security app.

Thanks to anyone who can give any hints, I appreciate it.
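A hedged first check is whether the scheduler is running, skipping, or erroring on that search; a sketch against the internal scheduler log (the savedsearch_name value is a placeholder for the real correlation search name):

index=_internal sourcetype=scheduler savedsearch_name="<correlation search name>"
| stats count by status, reason

A status of skipped (with a reason such as hitting concurrency limits), or no scheduler events at all, would each point to a different cause than the correlation search logic itself.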