All Topics


Hello All,

We have an indexer cluster that uses SmartStore in AWS S3. Things appear to be working, but we have observed the following in the logs on the index peers:

08-02-2021 12:34:53.725 -0400 ERROR IndexerIf [22952 FilesystemOpExecutorWorker-0] - failed to update bucket bid=sysandy_test2~2~32FED627-479E-41CF-A401-2F061C2EF7E5 with remote metadata due to err=
08-02-2021 12:45:04.668 -0400 ERROR IndexerIf [24603 FilesystemOpExecutorWorker-0] - failed to update bucket bid=sysandy_test2~3~32FED627-479E-41CF-A401-2F061C2EF7E5 with remote metadata due to err=
08-02-2021 12:55:21.684 -0400 ERROR IndexerIf [25930 FilesystemOpExecutorWorker-0] - failed to update bucket bid=sysandy_test2~1~80593238-39FB-443D-8E13-8FC3E521B22C with remote metadata due to err=

These errors sometimes occur on only one or two of the three index peers, but they result in a tsidx file that differs locally from the S3 copy. It is unclear why this is happening, and the err= value is blank. Has anyone seen this behavior, and are there any suggestions for resolving it? Thanks.
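To gauge how widespread these failures are across the peers, one option is to count them per peer and bucket from the internal logs. A minimal SPL sketch (log_level and component come from the standard splunkd log extractions; the rex for bid is an assumption based on the message format shown above):

    index=_internal sourcetype=splunkd log_level=ERROR component=IndexerIf "failed to update bucket"
    | rex "bid=(?<bid>\S+)"
    | stats count, earliest(_time) as first_seen, latest(_time) as last_seen by host, bid
    | convert ctime(first_seen) ctime(last_seen)

This at least shows whether the same buckets keep failing on the same peers or whether the errors move around.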
Hello all, I have one sourcetype for which I cannot create a static field extraction, because we have several fields with different names and it is almost impossible to cover all of them.

My data is similar to this:

    fieldname1 : values1 with spaces - fieldname2 : value2 - fieldname3 : value-for-field3
    field name4 : values4withoutspaces - fieldname5 : value5 (this should be included in value5) - fieldname6 : value-for-field3
    fieldname7 :

All kv pairs are delimited by " - " and the key/value delimiter is " : ". To cover this requirement, I have a field transform that uses a regex to extract key-value pairs automatically:

    [wildcard_extractions]
    CLEAN_KEYS = 0
    FORMAT = $1::$2
    REGEX = (\S+)\s:\s(\S+)

PROBLEM: When the field name or the value has spaces, I cannot get the full values.

Could someone more experienced than me help me with my regex expression, please?
https://regex101.com/r/R9XhmD/1
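Not an authoritative fix, but one approach worth trying is to anchor each pair on the " - " delimiter and use a lookahead to decide where a value ends, so that both keys and values may contain spaces. A sketch of the transform (same stanza name as above; the regex is an assumption and should be validated against real data, e.g. on regex101):

    [wildcard_extractions]
    CLEAN_KEYS = 0
    FORMAT = $1::$2
    REGEX = (?:^|\s-\s)([^:]+?)\s:\s(.+?)(?=\s-\s[^:]+?\s:\s|\s*$)

The lookahead treats everything up to the next " name : " delimiter (or the end of the event) as the value, so "values1 with spaces" and "value5 (this should be included in value5)" are captured whole.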
Hi, I want to place a panel on top of an existing panel when a cell is clicked.

First screen: (screenshot)
Second screen: (screenshot)

Is this feasible through Splunk?
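Simple XML cannot truly stack one panel on top of another, but a similar effect is often achieved by revealing a hidden panel via a token set on cell click. A minimal sketch (the searches and the show_detail token name are placeholders, not taken from the original dashboard):

    <row>
      <panel>
        <table>
          <search>
            <query>index=_internal | stats count by sourcetype</query>
            <earliest>-60m</earliest>
            <latest>now</latest>
          </search>
          <drilldown>
            <set token="show_detail">$click.value$</set>
          </drilldown>
        </table>
      </panel>
      <panel depends="$show_detail$">
        <title>Detail for $show_detail$</title>
        <table>
          <search>
            <query>index=_internal sourcetype="$show_detail$" | head 100</query>
            <earliest>-60m</earliest>
            <latest>now</latest>
          </search>
        </table>
      </panel>
    </row>

Clicking a cell sets the token and makes the second panel appear; an <unset> drilldown action can hide it again.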
I am looking to add an icon beside my panel title, somewhat like this: (screenshot). However, I'm getting this instead: (screenshot). Can anybody suggest?

    <panel id="mon1">
      <html>
        <div>
          <img height="50px" width="50px" src="/static/app/ERPTower/icons8-in-transit-96.png" style="float:left"/>
        </div>
      </html>
      <title>SHIPMENT FULFILLMENT</title>
      <table>
        <search>
          <query>| makeresults | eval name1="Ship Confirmation Request" | eval name2="Shipment Loads" | eval name3="Ready to be Shipped loads" | table name1, name2, name3 | transpose | fields - column | rename "row 1" as " "</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
      <html>
        <style>
          #mon1 {
            width: 20% !important;
            align: center !important;
            text-align: center !important;
            padding: 0px !important;
            margin: 0px 0px 0px 0px !important;
          }
        </style>
      </html>
    </panel>
We are having an issue with the Splunk_TA_nix/bin/ps.sh script and the way it reports CPU usage for servers with multiple CPUs. It caps the CPU value at 100, but that value can actually go up to (number of CPUs) * 100, so a server with 32 CPUs can reach 3200. Limiting that value to 100 seems to give incorrect information. Can anyone give some insight into what's going on here? We are on Enterprise 7.3, if that makes a difference. Thanks
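To quantify how often the cap is hit, one option is to count process samples pinned at exactly 100 in the ps data. A sketch, assuming the Splunk_TA_nix default sourcetype ps and its pctCPU field (adjust the index to your environment):

    index=os sourcetype=ps
    | stats count(eval(pctCPU=100)) as capped_samples, count as total_samples by host
    | eval capped_pct=round(capped_samples/total_samples*100, 2)
    | sort - capped_pct

Hosts with a high capped_pct are the ones where the 100 ceiling is most likely hiding real multi-core usage.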
I get this error message in my ES: "Intelligence download of "mittre_attack" has failed on this host." I have Splunk Enterprise and ES, both on Linux servers.
While creating a dashboard I used scheduled reports to present visuals. The problem is that the reports have overlapping queries, because the dashboard was originally implemented with base searches. How can I use scheduled reports as base searches? Or, how can I use data from the same scheduled report to create different visuals in the same dashboard? Thanks and regards.
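One common pattern is to pull the scheduled report's cached results into a base search with loadjob, then post-process it per panel. A sketch in Simple XML (the owner:app:report scope "admin:search:My Scheduled Report" and the post-process queries are placeholders):

    <dashboard>
      <label>Report-backed dashboard</label>
      <search id="base">
        <query>| loadjob savedsearch="admin:search:My Scheduled Report"</query>
      </search>
      <row>
        <panel>
          <chart>
            <search base="base">
              <query>stats count by host</query>
            </search>
          </chart>
        </panel>
        <panel>
          <table>
            <search base="base">
              <query>stats count by sourcetype</query>
            </search>
          </table>
        </panel>
      </row>
    </dashboard>

loadjob reads the report's last scheduled run instead of re-running the search, which avoids the overlapping queries.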
I have noticed a notable event, but when we tried to open the correlation search related to it, it said "search doesn't exist", and I also can't find it in Content Management. After searching, I found the saved search under the Search app in local/savedsearches.conf. Why doesn't it show up in Content Management, and how can I solve that issue? Thanks.
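To confirm which app and file the stanza actually lives in, btool can help. A sketch (replace the saved search name with yours):

    $SPLUNK_HOME/bin/splunk btool savedsearches list "My Correlation Search" --debug

One possibility, offered tentatively: ES Content Management only lists searches that carry the correlation-search annotations (e.g. action.correlationsearch.enabled = 1), so a plain savedsearches.conf stanza in the Search app's local directory may not qualify until those attributes are added or the search is recreated through the ES UI.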
After a Windows UF upgrade from 7.1.10 to 8.2.0, we are getting "splunk-perfmon.exe counters not found" errors in the Splunk _internal index. However, Windows perfmon data is still coming into Splunk. Before the upgrade I didn't see this type of error. Please help me with this.

08-03-2021 07:55:02.999 -0500 ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe"" splunk-perfmon - OutputHandler::composeOutput: Counters not found: % Processor Time

08-03-2021 07:35:02.734 -0500 ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-perfmon.exe"" splunk-perfmon - OutputHandler::composeOutput: Counters not found: IO Read Operations/sec
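If the counters exist but are being resolved under localized or renamed names after the upgrade, pinning the input to English counter names is one thing to try. A sketch of the relevant inputs.conf stanza (the stanza name, instance, and interval are placeholders; "% Processor Time" is taken from the error above):

    [perfmon://CPU]
    object = Processor
    counters = % Processor Time
    instances = _Total
    interval = 60
    useEnglishOnly = true

Restart the forwarder after changing the stanza and check whether the ExecProcessor errors stop.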
How do you format an array using a TA-webtools GET? I am trying to filter the GET response using an array:

    severity=Critical, High, Moderate
Rolling out splunkforwarder in the enterprise using the RPM install, but having no luck with some old legacy RHEL 5 servers. They are running a 64-bit 2.6 kernel, so it should work. But rpm -i fails with this message:

    # rpm -ivh splunkforwarder-8.2.1-ddff1c41e5cf-linux-2.6-x86_64.rpm
    error: splunkforwarder-8.2.1-ddff1c41e5cf-linux-2.6-x86_64.rpm: Header V4 RSA/SHA256 signature: BAD, key ID b3cd4420
    error: splunkforwarder-8.2.1-ddff1c41e5cf-linux-2.6-x86_64.rpm cannot be installed

Is the issue that RHEL 5 has a problem with V4 RSA? Am I stuck having to install from a tarball? Kernel version on this server:

    # uname -a
    Linux intwebhfindev 2.6.18-419.el5 #1 SMP Wed Feb 22 22:40:57 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
    # cat /etc/redhat-release
    Red Hat Enterprise Linux Server release 5.11 (Tikanga)

Thanks
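RHEL 5 ships an rpm old enough that it cannot verify SHA256-based V4 signatures, so the failure may be the verification step rather than the package itself. Assuming you trust the package's provenance, one thing to try is skipping signature and digest checks at install time:

    # sketch: bypass signature/digest verification on the legacy rpm
    rpm -ivh --nodigest --nosignature splunkforwarder-8.2.1-ddff1c41e5cf-linux-2.6-x86_64.rpm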
Hi, I have had this problem twice now on two different computers.

I make a VM based on a distro of Ubuntu/Debian, then install Splunk using the .deb file from splunk.com (the newest one you get when you start your trial). I use my own user, not the root user, to install, and I follow the steps of https://www.bitsioinc.com/tutorials/install-splunk-linux/

Everything works fine. Splunk starts and runs with no problems... until I restart the VM. Then localhost:8000 and the external IP both stop working, and it is not possible to connect to the Splunk web GUI. I use the same account both for the installation and for logging in after restarting.

What am I doing wrong? Other VMs on the machine don't report problems; it's only Splunk, and only after restarting the VM. The VM itself has internet and works fine; it's only Splunk that has issues.
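Splunk does not start itself after a reboot unless boot-start has been enabled, so one thing to rule out first is that splunkd simply is not running. A sketch, assuming a default /opt/splunk install and a non-root user named splunkuser:

    # check whether splunkd survived the reboot
    /opt/splunk/bin/splunk status

    # enable start-at-boot for the non-root user (run once, as root)
    sudo /opt/splunk/bin/splunk enable boot-start -user splunkuser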
Splunk Enterprise version: 8.1.3

Hi all, can Splunk Enterprise 8.1.3 handle circular logs?
Hi, hello. Splunk is not showing milliseconds for JSON logs. I have found some questions and answers here in the Splunk community, but without success.

Description: I have HFs, an indexer cluster, and a search head cluster.

HF props.conf:

    [k8s:dev]
    # temporarily removed to fix 123123
    #INDEXED_EXTRACTIONS = JSON
    TIME_PREFIX = {\\"@timestamp\\":\\"
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N
    TRUNCATE = 200000
    TRANSFORMS-discard_events = setnull_whitespace_indented,setnull_debug_logging
    SEDCMD-RemoveLogProp = s/("log":)(.*)(?="stream":)//

HF transforms.conf:

    [setnull_java_stacktrace_starttab]
    SOURCE_KEY = field:log
    REGEX = ^\tat\s.*
    DEST_KEY = queue
    FORMAT = nullQueue

    [setnull_whitespace_indented]
    SOURCE_KEY = field:log
    REGEX = ^\s+.*
    DEST_KEY = queue
    FORMAT = nullQueue

    [setnull_debug_logging]
    SOURCE_KEY = field:log
    REGEX = .*?\sDEBUG\s
    DEST_KEY = queue
    FORMAT = nullQueue

Search head props.conf:

    # workaround, see 123123
    [k8s:dev]
    KV_MODE = json

Everything looks fine in the ADD DATA preview on the HF, and on the search head too, but not when I actually search.

I can insert only part of the JSON here:

    {"log":"{\"@timestamp\":\"2021-08-03T09:00:57.539+02:00\",\"@version\":\"1\",\"message\":

Also, in ADD DATA on the HF, when I remove TIME_PREFIX and TIME_FORMAT the milliseconds still appear, but when I "break" TIME_PREFIX slightly there is an error and the file timestamp is used (I think it is the file timestamp).

My questions:
1. What am I doing wrong?
2. Is it possible to configure TIME_PREFIX and TIME_FORMAT for KV_MODE on the search head? As far as I know, they are applied on the HF during parsing.
3. Is it possible to configure KV_MODE?

Thank you very much for your suggestions.
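One detail worth checking, based only on the escaped sample event above: the timestamp has three sub-second digits (.539) while %6N declares six, and the +02:00 offset is never consumed by the TIME_FORMAT. A variant to try on the HF (a sketch, not a confirmed fix; MAX_TIMESTAMP_LOOKAHEAD is an added assumption):

    [k8s:dev]
    TIME_PREFIX = \\"@timestamp\\":\\"
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z
    MAX_TIMESTAMP_LOOKAHEAD = 40

Splunk only displays milliseconds in _time when the timestamp was parsed with sub-second precision at index time, which is why the search-time KV_MODE setting cannot add them back afterwards.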
On the way to testing ITSI, I first installed IT Essentials Work on my single standalone Splunk server, following the instructions from this link: https://docs.splunk.com/Documentation/ITEWork/4.9.2/Install/Install#Install_IT_Essentials_Work_on_a_single.2C_on-premises_instance

I simply stopped the service, unzipped the tgz, and started Splunk. Once done, I go to the IT Essentials Work app and get the following error on the Infrastructure Overview: (screenshot). Any idea what could be happening? I could not find anything in the logs so far. Thanks
Hi, in one of my fields I have data in the format below, and I want it displayed day-wise, i.e. the time for each day separately. Any suggestions?

    Mon-Sat: 10AM-9PM, Sun: 11AM-6PM
    Mon-Sat: 9:30AM-9:30PM, Sun: 10AM-8PM
    Mon-Wed: 9:30am - 9pm, Thu: 9:30am - 9pm, Fri: 9:30am - 9pm, Sat: 9am - 9pm, Sun: 10am - 7pm
    Mon-Sat: 9:30AM-9:30PM, Sun: 10AM-8PM
    Mon-Sat: 9:30AM-9:30PM, Sun: 10AM-8PM
    Mon-Wed: 9:30am - 9pm, Thu: 9:30am - 9pm, Fri: 9:30am - 9pm, Sat: 9am - 9pm, Sun: 10am - 7pm
    Mon-Sat: 9:30AM-9:30PM, Sun: 10AM-8PM
    Mon-Sat: 9:30AM-9:30PM, Sun: 10AM-8PM
    Mon-Wed: 9:30am - 9pm, Thu: 9:30am - 9pm, Fri: 9:30am - 9pm, Sat: 9am - 9pm, Sun: 10am - 6pm
    Mon-Sat: 9:30AM-9:30PM, Sun: 10AM-8PM
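One way to break this apart (a sketch; the field name hours is an assumption, rename it to your field): split each value on the ", " between day groups, expand to one row per group, then separate the day range from the times with rex:

    ... | eval segment=split(hours, ", ")
    | mvexpand segment
    | rex field=segment "^(?<days>[^:]+):\s*(?<times>.+)$"
    | table days, times

For the first sample row this yields one row with days="Mon-Sat", times="10AM-9PM" and another with days="Sun", times="11AM-6PM".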
Hello, is there a possibility to access these fields?   Thanks, Ava
Hi Splunkers,

I'm using Splunk 8.2.1 with Splunk Stream 7.3, and the deployment server for the deployment of Splunk_TA_stream. I'm getting data into Splunk, but the host is set to "$decideOnStartup". On the other hand, the Stream Forwarder ID is set to the correct hostname. I followed this guide (https://docs.splunk.com/Documentation/StreamApp/7.3.0/DeployStreamApp/InstallSplunkAppforStreaminadistributeddeployment), but step 5 of "Use the deployment server to distribute the Splunk Add-on for Stream Forwarders" did not work for me, so I copied the content of Splunk_TA_stream manually to deployment-apps.

Any ideas?
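For context, "$decideOnStartup" is the literal default of the host setting in inputs.conf; splunkd normally replaces it with the machine's hostname at startup, so seeing it indexed verbatim suggests a pushed config may be overriding the forwarder's own value. One tentative check on the affected forwarder (the hostname is a placeholder):

    # $SPLUNK_HOME/etc/system/local/inputs.conf on the forwarder
    [default]
    host = myhost.example.com

Then restart the forwarder and verify that new Stream events carry the right host.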
Hello Everyone,

I have written a props.conf in which I have added the eval statement below:

    Eval-appname="newapp"

Other EXTRACT commands are also there. I have ingested the file and placed my app on an indexer for testing. I am unable to see the appname field in the GUI, but I can see the fields extracted by the EXTRACT commands in the same props.conf. I have checked the app permissions and field permissions as well; they are set to "All Apps". Please help with getting the calculated fields to show in the GUI.
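For comparison, a minimal sketch of the stanza as the props.conf spec writes it (the sourcetype name and the EXTRACT regex are placeholders): the docs spell the attribute EVAL- in uppercase, and calculated fields are applied at search time, so the props.conf must be visible to the instance where the search actually runs, not just the indexer.

    [my_sourcetype]
    EVAL-appname = "newapp"
    EXTRACT-myfields = (?<fieldA>\w+)\s(?<fieldB>\w+)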
Hello experts, I would like to import data from Splunk into Google BigQuery. Do you have any experience with this scenario? Many thanks, Pat
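Not a turnkey integration, but one low-tech path is exporting a search to CSV and loading it with the bq CLI. A sketch (the index, field list, dataset, and table names are placeholders; authentication for both CLIs is assumed to be configured):

    # export search results from Splunk as CSV
    splunk search 'index=main | table _time, host, source, sourcetype' -output csv -maxout 0 > splunk_export.csv

    # load the CSV into BigQuery
    bq load --autodetect --source_format=CSV mydataset.splunk_export splunk_export.csv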