All Topics

Hi, I'm creating a report with the following search that runs each month covering the past 3 months of data. It works and I can display the results in a bar chart, but it gets sorted alphabetically by sourcetype:

index=*
| timechart span=1mon count by sourcetype
| eval _time = strftime(_time,"%B")
| rename _time as Time
| fields - _*
| transpose header_field=Time column_name="sourcetype"

I want to sort it by the count of the last month. So for example, if I run the report in July I get the columns "sourcetype", "April", "May", "June". Each month that I run the report the column names will change. I can get the results I want this month by adding:

| sort - "June"

How can I set this up automatically so that the results are sorted by the last column (the previous month)?
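A possible approach (a sketch only, not tested against your data): derive a per-sourcetype sort key from the most recent month before pivoting, so the ordering no longer depends on a hard-coded column name. The untable/xyseries round trip below is an assumption about how your data pivots:

```
index=*
| timechart span=1mon count by sourcetype
| untable _time sourcetype count
| eventstats max(_time) as latest
| eventstats max(eval(if(_time=latest, count, null()))) as sortkey by sourcetype
| sort 0 - sortkey
| eval Time=strftime(_time,"%B")
| xyseries sourcetype Time count
```

If the pivot at the end does not preserve the row order on your version, the same sortkey trick can be reapplied after the pivot instead.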
I have a batch file in the jar directory of a TA-app on all my forwarders. The batch file has the following structure:

process1 > D:\process1_stats.txt
process2 > D:\process2_stats.txt

How would I configure this in the inputs.conf of the TA-app to run this batch file every X seconds? I've tried using a script stanza, but that simply echoes what is put into the Cmd prompt and the .txt files are getting populated.
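A sketch of what the TA's inputs.conf might look like, assuming the batch file is moved into the app's bin directory (Splunk only executes scripts from there); the file names and interval are illustrative:

```
# Run the batch file every 60 seconds (path is relative to the app)
[script://.\bin\collect_stats.bat]
interval = 60
disabled = 0

# The batch redirects its output to files, so monitor those to index the data
[monitor://D:\process1_stats.txt]
sourcetype = process_stats
disabled = 0
```

Note that a scripted input indexes whatever the script writes to stdout; since the batch redirects everything to D:\, the script stanza alone will index nothing, hence the monitor stanzas.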
I'm running Splunk Enterprise 8.0.4.1 on Ubuntu 20.04 LTS, a single-user instance using an Enterprise dev/test license. Any attempt to send email results in the following in python.log:

2020-07-07 21:45:15,136 +0000 ERROR sendemail:1435 - [HTTP 404] https://127.0.0.1:8089/servicesNS/admin/search/saved/searches/_new?output_mode=json
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/search/bin/sendemail.py", line 1428, in <module>
    results = sendEmail(results, settings, keywords, argvals)
  File "/opt/splunk/etc/apps/search/bin/sendemail.py", line 261, in sendEmail
    responseHeaders, responseBody = simpleRequest(uri, method='GET', getargs={'output_mode':'json'}, sessionKey=sessionKey)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/rest/__init__.py", line 577, in simpleRequest
    raise splunk.ResourceNotFound(uri)
ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/servicesNS/admin/search/saved/searches/_new?output_mode=json

The mail server is on the local host; TLS isn't in use. I've verified that the Splunk user can send email both via the mail command and by calling sendmail directly. The error happens if I use an alert with email as well as when using | sendemail from a search.

As this instance is limited to a single user, the user that this runs under has the admin role. For logging preferences, I set EmailSender to DEBUG but I'm not seeing anything useful.
Details: a ServiceNow Kingston / Splunk 7.3.2 (on-prem) integration was working perfectly with the ServiceNow Add-On 5.0.1. The only mods I made to the OOTB configuration were to add the proper certs to the HTTP object in the Splunk account validation script via http.add_certificate(foo bar) and to add the SNOW cert chain to cacerts.txt within the add-on.

Upon upgrading to 8.0.2.1 the integration is no longer functioning. When I try to connect to SNOW I'm getting SSL certificate errors, even though the setup is the same as it was on 7.3.2. Just curious if anyone has had the same or similar issues with upgrading Splunk to 8.x?
Here's some example data in Splunk (bookstore logs):

time(ms)   id              stage     payload
1020984    aaaa-bbbb-cccc  checkout  Lord Of The Rings;
1310953    aaaa-bbbb-cccc  cart      Harry Potter;Game Of Thrones;
1340932    aaaa-bbbb-cccc  cart      Harry Potter;
1345608    dddd-eeee-ffff  cart      Splunk for Dummies;
1352093    dddd-eeee-ffff  cart      Splunk for Dummies;Java 101;
1420838    dddd-eeee-ffff  checkout  Order #999999999
1450928    aaaa-bbbb-cccc  checkout  Order #123456789

This shows 2 customers shopping for books and then buying them. The most recent cart row contains what they bought. It also contains a previous checkout from one of the customers. I want to create a query that will return this:

time(ms)   id              stage     payload                       time_spent_browsing
1420838    dddd-eeee-ffff  checkout  Splunk for Dummies;Java 101;  6485
1450928    aaaa-bbbb-cccc  checkout  Harry Potter;                 29979

The payload field should contain the most recent cart row's payload with a matching id. The time_spent_browsing field should be the (most recent cart's time - the earliest cart's time), where the earliest cart event should be after the previous checkout ("thank you") event. I hope that makes sense.
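One way to sketch this (untested, and it ignores the "only carts after the previous checkout" refinement, which would need something like streamstats or transaction): aggregate the cart events per id, with the index name as a placeholder:

```
index=bookstore stage=cart
| stats earliest(_time) as first_cart latest(_time) as last_cart latest(payload) as payload by id
| eval time_spent_browsing = last_cart - first_cart
```

Here stats latest()/earliest() select by event time, so latest(payload) picks the most recent cart's payload for each id.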
Hi, I am using a batch input to ingest some huge files with single-line events that do not have a timestamp. I have used the DATETIME_CONFIG = CURRENT config but found the following error:

WARN AggregatorMiningProcessor - Too many events (100K) with the same timestamp: incrementing timestamps 1 second(s) into the future to insure retrievability

Which is more effective, NONE or CURRENT, and what can be used in my case? Thanks
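For context, that warning comes from the line-merging aggregator, which groups lines by timestamp; since the events are single-line anyway, bypassing line merging usually silences it. A hedged props.conf sketch (the sourcetype name is a placeholder):

```
[my_batch_sourcetype]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```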
Hello, I am trying to use iplocation to search for instances of a specific city or region, for example:

* iplocation ipaddress Region="region"

Instead of returning that specific region it will return all regions. Can anyone tell me if this is a bug or am I missing something? Thanks
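For what it's worth, iplocation only adds fields (City, Region, Country, etc.) to events; it does not take a filter as an argument. The filtering happens in a subsequent search or where clause (index and region values below are placeholders):

```
index=your_index
| iplocation ipaddress
| search Region="YourRegion"
```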
Upgrade from 3.2.0 to 3.3.1 on Splunk 8.0.2.1. Had some trouble doing the upgrade; tried uploading from file and got a message about files being in use. Disabled the app, upgraded, enabled, multiple restarts:

AttributeError: module 'dbx2' has no attribute 'dbx_logging_formatter'

Restarted Windows, no joy. Clean install, similar results:

2020-07-08 13:46:17,198 -0500 ERROR __init__:164 - The REST handler module "dbx_rh_proxy" could not be found. Python files must be in $SPLUNK_HOME/etc/apps/$MY_APP/bin/
2020-07-08 13:46:17,198 ERROR The REST handler module "dbx_rh_proxy" could not be found. Python files must be in $SPLUNK_HOME/etc/apps/$MY_APP/bin/
2020-07-08 13:46:17,198 -0500 ERROR __init__:165 - module 'os' has no attribute 'unsetenv'

Clean install of 3.2.0:

07-08-2020 13:52:18.071 -0500 ERROR ModularInputs - Unable to initialize modular input "server" defined in the app "splunk_app_db_connect": Introspecting scheme=server: script running failed (exited with code 2).

Turns out dbx_settings.conf was missing taskServerPort. Even though it was defined in the web UI, it wasn't in the config file. Manually adding it and restarting Splunk allowed it to start. Then I just had to retype my identity passwords (otherwise BouncyCastle crypto errors).
Hi, I have data going to my indexers and also selective data going through a HF off to a 3rd party via syslog. I know Splunk works off pipelines, and if one stops it blocks and queues back up the chain. Has anyone done something clever here? Basically, when the 3rd party has internet issues, it stops any data on that HF going to my indexer. The only thing I can think of doing is having another HF that proxies the data, where I set a large queue size and drop data after so long, therefore technically never causing a block on our side.
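One knob that may help (a sketch with a placeholder group name and server): outputs.conf supports dropEventsOnQueueFull, which waits the given number of seconds on a full output queue and then discards events, so the blockage never propagates back up the chain. It is shown here on a tcpout group; check the outputs.conf spec for whether it applies to your syslog output stanza:

```
[tcpout:thirdparty]
server = 10.0.0.1:9997
maxQueueSize = 100MB
dropEventsOnQueueFull = 300
```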
Question: I am trying to determine if it's possible to exclude selected columns of data from algorithm processing when running a search analysis in the Search and Reporting window. This would equate to using a Python pandas DataFrame and selecting the features you desire from the dataset to be processed or considered in the algorithm.

Example:

index=firewall action="allowed" (host="myhost*") transport="tcp"

Assumption: my data has 10 columns, but I only want to use 6 of them in the algorithm.

Problem: filter the columns to be used when executing the 1CSVM algorithm. By default I believe Splunk is assuming I want to analyze all columns as features.

`comment("Fit Using 1CSVM Algorithm")`
| fit OneClassSVM * kernel="rbf" gamma=1 nu=.0001 shrinking=False
| outputlookup compositeResults.csv append=true
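A sketch of the usual way to restrict features: name the fields explicitly instead of using the * wildcard (the field names below are placeholders for your six columns):

```
index=firewall action="allowed" (host="myhost*") transport="tcp"
| fields field1 field2 field3 field4 field5 field6
| fit OneClassSVM field1 field2 field3 field4 field5 field6 kernel="rbf" gamma=1 nu=.0001 shrinking=False
| outputlookup compositeResults.csv append=true
```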
Hi, why can't I select the Data Model "Inventory" when I try to map custom data in Add-on Builder? I am only allowed to select one or more of its child datasets. This is not the case with other Data Models, such as Databases, Endpoint, Malware, etc., where I can select not only the child nodes but also the top-level dataset. What is the difference? Any ideas how to approach this?
What is a way I can confirm that a Splunk server is actually indexing data?
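One quick check (a sketch; run it over a recent time range on that server): look at the per-index throughput in the server's own metrics. Nonzero indexed KB per index means it is actively indexing:

```
index=_internal source=*metrics.log* group=per_index_thruput
| stats sum(kb) as indexed_kb by series
```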
I have a Splunk deployment server that deploys apps to the UFs. I have created an app WinPerfmon with the following inputs.conf:

[perfmon://LogicalDisk]
counters = % Free Space; Free Megabytes
disabled = 0
instances = *
interval = 10
object = LogicalDisk
useEnglishOnly = true

## Memory
[perfmon://Memory]
counters = Available MBytes
disabled = 0
interval = 10
object = Memory
useEnglishOnly = true

The app is created on the UF, but splunk-perfmon.exe runs for one second, then closes and does not send any data to the indexer. In splunkd.log:

07-08-2020 16:57:32.423 +0200 DEBUG ExecProcessor - Running: "C:\Program Files\HomeOffSec\bin\splunk-perfmon.exe" on PipelineSet 0
07-08-2020 16:57:32.423 +0200 DEBUG ExecProcessor - PipelineSet 0: Created new ExecedCommandPipe for ""C:\Program Files\HomeOffSec\bin\splunk-perfmon.exe"", uniqueId=5
07-08-2020 16:57:32.423 +0200 DEBUG QueueManager - Failed to parse memory queueSize for path=perfmon and conf=inputs.
07-08-2020 16:57:32.423 +0200 DEBUG QueueManager - Failed to parse queueSize for path=perfmon and conf=inputs.
07-08-2020 16:57:32.423 +0200 DEBUG QueueManager - Memory queueSize for path=perfmon and conf=inputs and queueName=execProcessorInternalQ set to 512000.

I have another app, WinEventlog, and splunk-wineventlog.exe is working. The UF has been installed as a Windows local admin user. Could anyone help me please? Should I do something else in Windows?
Hi, I'm trying to get a product count for yesterday and for 7 days before yesterday in two separate fields. The results are coming back correct for yesterday, but for the second field all the results are zero. I wanted to know if my logic is correct. Here is what I have:

index=something host=something
| where ResponseCode = "Success"
| stats count as "Product Count Yesterday", count(eval(relative_time(now(), "-8d@d"))) as "Product Count 7 days ago" by product
| sort product desc

Thank you.
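The second aggregation looks like the issue: count(eval(...)) expects a boolean condition, but relative_time() just returns a timestamp, so nothing is being compared against _time. A hedged sketch of the intended logic (untested; assumes the search time range covers at least the last 8 days):

```
index=something host=something ResponseCode="Success" earliest=-8d@d
| stats count(eval(_time>=relative_time(now(),"-1d@d") AND _time<relative_time(now(),"@d"))) as "Product Count Yesterday"
        count(eval(_time>=relative_time(now(),"-8d@d") AND _time<relative_time(now(),"-7d@d"))) as "Product Count 7 days ago"
        by product
| sort product desc
```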
Hi all, I need to upgrade the universal forwarder on a Windows server.

1. Can I just download the latest version of the universal forwarder and run it on top of the current installation, or would it be better to uninstall the current one first?
2. Is there a way to initiate an upgrade from the current version?
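On point 1: a common pattern is to run the new MSI over the existing installation, which upgrades in place and keeps the configuration. A sketch (the version in the filename and the flags are illustrative; backing up $SPLUNK_HOME\etc first is prudent):

```
msiexec.exe /i splunkforwarder-8.0.5-x64-release.msi AGREETOLICENSE=Yes /quiet
```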
Hi all, I appreciate that there are tons of answers on this, but I am having issues getting it to work! I have a CSV named known-ip-addresses.csv. It contains the same fields as those in the indexed data (eventName, src, "user.Identity.arn"), in exactly the same case and separation. The inputlookup works OK and I can search against values. I have not created a lookup definition.

In the indexed data we have a sourcetype with the same fields, and I am trying to find any IPs (the src field) that are not in the inputlookup:

sourcetype=aws:cloudtrail eventName=ConsoleLogin NOT [inputlookup known-ip-addresses.csv | fields eventName, src, "user.Identity.arn" ]

The result is that I am getting a mix of addresses that are in the CSV as well as those that are not. Can anyone point me in the right direction? Thanks in advance.
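A likely culprit: the subsearch returns all three fields, so the NOT only excludes events that match eventName AND src AND "user.Identity.arn" together. To exclude by IP alone, return only src from the lookup:

```
sourcetype=aws:cloudtrail eventName=ConsoleLogin NOT [| inputlookup known-ip-addresses.csv | fields src]
```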
Hey guys, I'm configuring an indexer cluster, so I'm going to have something like this:

sh1 + sh2
ix1 + ix2 + ix3_master (indexer cluster)

1. How should I configure a DB input from our Oracle DB to the indexer cluster?
2. And what speed of data replication will I have, just in general?

Some specifics: CentOS Linux, about 8 CPUs and 16 GB of RAM per ix node.
Hi guys, I ask for help with this. I tried to search with the query below:

index=ott sourcetype=drm_license | join type=inner userId [search index=ott sourcetype=drm_user_return]

The userId field has 128 characters (numbers and letters). I know that the subsearch is heavy and will hurt performance, but this search will only be run once a month, so the performance impact doesn't matter to me. The search does not match everything (drm_license and drm_user_return) even when the fields have equal values. Example: out of 1000 events only 600 match, even though the fields of the other 400 have equal values.
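For reference, join's subsearch is subject to result and time limits (around 50,000 rows by default), which can silently drop matches on large sets. A stats-based merge avoids the subsearch entirely; this is a sketch, and values(*) can mingle fields if the two sourcetypes share field names:

```
index=ott (sourcetype=drm_license OR sourcetype=drm_user_return)
| stats values(*) as * by userId
```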
Hi, I have 6 indexers, and when one or two of them go down the forwarder complains and blocks traffic for a few minutes; then, when it can load-balance to a working one, it continues. Is there a way of setting a "don't use this indexer for x amount of time"? The spec file covers so many options... I just want to know what others do.
I have a couple of .txt files that I want to parse differently than the rest of my data coming in from my forwarders. How could I change props.conf (or any other relevant config file) to parse this specific sourcetype/input differently (e.g. turn off breaks before dates, etc.)? Additionally, would I be able to do this at the forwarder/deployment-app level, or would I have to do this all in $SPLUNK_HOME/etc/system/local on the main Splunk instance server?
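A sketch of a per-sourcetype parsing override (the sourcetype name is a placeholder; this example simply disables line merging so events break on newlines regardless of dates). Note that parsing-phase props.conf settings take effect where the data is first parsed — an indexer or heavy forwarder, not a universal forwarder — so a deployment app works as long as it lands on that tier rather than on the UFs:

```
[my_txt_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```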