All Topics

Hi there, if I have several Splunk Cloud stacks and an on-premises heavy forwarder, how can I configure the heavy forwarder to send specific logs to a specific Splunk Cloud stack?
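A minimal sketch of the usual approach, assuming two hypothetical output groups (cloud_a, cloud_b) whose forwarder credentials/certificates are already installed; routing is then set per input with _TCP_ROUTING:

    # outputs.conf on the heavy forwarder (group names and hosts are hypothetical)
    [tcpout:cloud_a]
    server = inputs.stack-a.splunkcloud.com:9997

    [tcpout:cloud_b]
    server = inputs.stack-b.splunkcloud.com:9997

    # inputs.conf -- send everything from this input to cloud_a only
    [monitor:///var/log/app_a]
    _TCP_ROUTING = cloud_a

For event-level routing (by content rather than by input), props.conf/transforms.conf can set the _TCP_ROUTING key instead.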
We are trying to output computers that appear in index A but not in index B, to ensure computers are being deployed with the correct software.

Index A
| name | objectclass | field 1 | field 2 | field 3 |
| PC 1 | computer    | x       | x       | x       |
| PC 2 | computer    | x       | x       | x       |
| PC 3 | computer    | x       | x       | x       |
| PC 4 | computer    | x       | x       | x       |
| PC 5 | computer    | x       | x       | x       |
| PC 6 | computer    | x       | x       | x       |

Index B
| cname | objectclass | field 1 | field 2 | field 3 |
| PC 1  | computer    | x       | x       | x       |
| PC 2  | computer    | x       | x       | x       |
| PC 5  | computer    | x       | x       | x       |
| PC 6  | computer    | x       | x       | x       |

I need the output to show only PC 3 and PC 4, so the list can be supplied to the proper team to install the application. I have tried join with type=left and type=outer, and I have also attempted stats, but I believe my logic is wrong. Your help is greatly appreciated.
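A minimal sketch of the usual stats-based approach, assuming the hostname field is name in index A and cname in index B, as in the tables above:

    (index=A) OR (index=B)
    | eval host=coalesce(name, cname)
    | stats values(index) AS idx BY host
    | where mvcount(idx)=1 AND idx="A"
    | table host

The stats groups both indexes by hostname, so any host whose only index value is A never appeared in B.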
Hello, we recently installed the Splunk Add-on for Java Management Extensions. We have it working in our test environment but not in production. We get this error message:

Received event for unconfigured/disabled/deleted index=jmx with source="source::com.sun.messaging.jms.server:type=Connection,subtype=Config,id=2333287862699082496" host="host::[removed]:0" sourcetype="sourcetype::jmx". So far received events from 1 missing index(es)

We have a distributed deployment. The index has been created on the indexers. When I log in to the indexers and go to Settings > Data > Indexes, I can see the 'jmx' index. However, if I log in to the management node and go to Settings > Distributed Environment > Indexer Clustering, the 'jmx' index isn't there. As far as I can tell, I've configured test and prod identically, so I'm not sure what the issue is. Does anyone have ideas on what I can check?
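For reference, in an indexer cluster the index normally has to be defined on the cluster manager and distributed to the peers via the configuration bundle, rather than created on each indexer directly; a minimal sketch, assuming default paths (the app name all_indexes is hypothetical):

    # On the cluster manager:
    # $SPLUNK_HOME/etc/master-apps/all_indexes/local/indexes.conf
    [jmx]
    homePath   = $SPLUNK_DB/jmx/db
    coldPath   = $SPLUNK_DB/jmx/colddb
    thawedPath = $SPLUNK_DB/jmx/thaweddb
    repFactor  = auto

    # then push the bundle:
    splunk apply cluster-bundle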
Hello Splunkers! I have an issue grouping multivalue fields after extracting fields from nested XML. The sample is as follows:

<WorkstationMetrics xmlns="xxxxxxxx">
  <WorkstationMetricData TypeCode="None" WorkstationID="0">
    <SequenceNumberValue Timestamp="2022-05-1" TypeCode="First">15704</SequenceNumberValue>
    <SequenceNumberValue Timestamp="2022-05-1" TypeCode="Last">15710</SequenceNumberValue>
  </WorkstationMetricData>
  <WorkstationMetricData TypeCode="Manual" WorkstationID="03">
    <SequenceNumberValue Timestamp="2022-05-1" TypeCode="First">9395</SequenceNumberValue>
    <SequenceNumberValue Timestamp="2022-05-1" TypeCode="Last">9463</SequenceNumberValue>
  </WorkstationMetricData>
  <WorkstationMetricData TypeCode="Manual" WorkstationID="05">
    <SequenceNumberValue Timestamp="2022-05-1" TypeCode="First">62</SequenceNumberValue>
    <SequenceNumberValue Timestamp="2022-05-1" TypeCode="Last">297</SequenceNumberValue>
  </WorkstationMetricData>
</WorkstationMetrics>

I tried the following search query to extract the fields, but the extracted fields are multivalued with varying cardinality, so some of my mvzip commands do not work as expected. Please find my search query below for reference:

index=... sourcetype=...
| spath output=workstationNumber path=WorkstationMetrics.WorkstationMetricData{@WorkstationID}
| spath output=sequenceType path=WorkstationMetrics.WorkstationMetricData.SequenceNumberValue{@TypeCode}
| spath output=sequenceNumber path=WorkstationMetrics.WorkstationMetricData.SequenceNumberValue
| eval consolidate=mvzip(sequenceType,sequenceNumber)
| mvexpand consolidate
| eval temp=split(consolidate,","), type=mvindex(temp,0), seqno=mvindex(temp,1)
| table workstationNumber type seqno

I expect to present this data in the following format:

Sl.no | WorkstationID | TypeCode | SequenceNumberValue
1     | 0             | First    | 15704
2     | 0             | Last     | 15710

Any help mapping two multivalue fields with varying cardinality would resolve this issue. Or do we need to think outside the box?
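One way around the varying cardinality is to split each event into one row per WorkstationMetricData block before extracting, so every row carries exactly one WorkstationID with its own First/Last pair; a sketch, assuming the XML layout above:

    index=... sourcetype=...
    | rex max_match=0 "(?s)(?<wmd><WorkstationMetricData.*?</WorkstationMetricData>)"
    | mvexpand wmd
    | rex field=wmd "WorkstationID=\"(?<WorkstationID>[^\"]+)\""
    | rex field=wmd max_match=0 "TypeCode=\"(?<TypeCode>First|Last)\">(?<SequenceNumberValue>\d+)<"
    | eval pair=mvzip(TypeCode, SequenceNumberValue)
    | mvexpand pair
    | eval TypeCode=mvindex(split(pair,","),0), SequenceNumberValue=mvindex(split(pair,","),1)
    | table WorkstationID TypeCode SequenceNumberValue

After the first mvexpand, TypeCode and SequenceNumberValue always have matching cardinality (two values per block), so the mvzip is safe.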
Hi, I am creating a dashboard using data from a CSV with multiple tabs. How can I use inputlookup where the source CSV has multiple tabs? Thanks,
Hi, I need to capture the first two lines following the word 'ERROR' in the log file.

Example 1:
2022-05-20 11:36:22,393 ERROR [com.https.custom.integration.Integration](WFA:346899 - 5678976 triExecute:671116263 IE=671116263) com.https.platform.smartobject.dataaccess.SmartObjectNotFoundException: No Smart Object with id '-1' exists
com.https.platform.smartobject.dataaccess.SmartObjectNotFoundException: No Smart Object with id '-1' exists
at com.https.platform.smartobject.dataaccess.SmartObjectDaoImpl.read(SmartObjectDaoImpl.java:131)

Example 2:
2022-05-20 11:36:36,804 ERROR [com.https.ws.https](Thread-789) com.https.ws.errors.AccessException: User does not have access to create records in this GUI. [User:- , GUI: triIntegrationInstance]
com.https.ws.errors.AccessException: User does not have access to create records in this GUI. [User: ahmuy, wasim- kb4567, GUI: triIntegrationInstance]
at com.https.ws.utils.Validator.validateCreateAccess(Validator.java:588)
at com.https.ws.WSProcessThread.processRecord(WSProcessThread.java:258)
at com.https.ws.WSProcessThread.run(WSProcessThread.java:147)

Please help me with the regex.
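A sketch of one way, assuming multi-line events and that "two lines" means the two lines immediately after the line containing ERROR (line1 and line2 are hypothetical field names):

    index=... sourcetype=...
    | rex "ERROR[^\n]*\n(?<line1>[^\n]+)\n(?<line2>[^\n]+)"
    | table _time line1 line2

The [^\n]* skips the remainder of the ERROR line itself, and each capture group then grabs one full following line; only the first ERROR per event is matched.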
Hello colleagues, I have a Python file which I add under Data inputs -> Scripts. I set the interval and set it up, and the script itself works, but Splunk launches it with its bundled interpreter at /opt/splunk/bin/python. I want it to run with the system interpreter instead:

/bin/python /.../.../file.py

Help please!
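A common workaround is to point the scripted input at a small shell wrapper that execs the interpreter you want; a sketch, with hypothetical app and file names (the script path is kept elided as in the post):

    # inputs.conf
    [script://$SPLUNK_HOME/etc/apps/my_app/bin/run_file.sh]
    interval = 300
    sourcetype = my_script

    # run_file.sh
    #!/bin/sh
    exec /bin/python /.../.../file.py

The wrapper must be executable (chmod +x) and live in an app's bin directory.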
I'm wondering what would happen if I ran "splunk btool whatever" from a script that's running as a scripted input. Theoretically it should run as whatever user splunkd.exe is running as, so the question is whether it will have all the necessary rights and run unattended properly, or whether it will stop and ask for credentials. The case in question is mostly about Windows and spawning a subprocess from a PowerShell script, but a general answer including Unix is also welcome.
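For what it's worth, btool reads the .conf files on disk directly and does not authenticate against splunkd, so it should not prompt for credentials; a minimal sketch of the call from a PowerShell scripted input (the install path is hypothetical):

    # PowerShell scripted input
    $splunk = "C:\Program Files\Splunk\bin\splunk.exe"
    # btool inspects on-disk .conf files; no session or login is involved
    & $splunk btool outputs list --debug 2>&1 | Write-Output

The child process inherits the service account of splunkd, so the main thing to verify is that that account can read $SPLUNK_HOME\etc.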
Hi! We are using a dashboard that displays all the indexes and information about those indexes, and I would like to add an additional column. The current search is:

| inputlookup name_of_the_lookup
| search index=* (index=***)
| sort by index
| table index, field_A, field_B, field_C, field_D, field_E, field_F, field_G, field_H, field_I, field_J, field_K

What I want is an additional column named 'Access' that shows true or false depending on whether the user currently viewing the dashboard has access to that index. Is there a search that would do this?
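A sketch of one approach: eventcount summarize=false only returns indexes the running user is allowed to search, so it can be joined in as an access marker (Access is the new column):

    | inputlookup name_of_the_lookup
    | search index=*
    | join type=left index
        [| eventcount summarize=false index=* | dedup index | eval Access="true" | fields index Access]
    | fillnull value="false" Access
    | table index, Access, field_A, field_B, field_C

This relies on the search running with the viewing user's permissions, so the dashboard panel must not run the search as owner or from a scheduled base search.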
I have a slightly atypical environment: an indexer cluster with two search head clusters talking to it. On one of the indexers, while looking for something completely different, I found this in the logs:

ERROR ClusterSlaveControlHandler - Peer <redacted> will not return any results for this search, because the search head is using an outdated generation (search head gen_id=25238; peer gen_id=25303). This can be caused by the peer re-registering and the search head not yet updating to the latest generation. This should resolve itself shortly.

Unfortunately, it didn't "resolve itself shortly". Furthermore, it persists across 75% of my indexers. I searched for this across the Answers boards, but there's not much info about it, and I'm not sure where to even start debugging this; that's the main problem, I think. To make matters more interesting, Splunk doesn't seem to return bad results or throw errors at me while searching (I do have some "hangs" on search, but that's probably due to a yet-unresolved storage latency problem). Any ideas where to look from here?
Hi, I have the following case which I can't get around. My search returns a table where, to give security analysts some useful information, a lookup adds a multivalue mitre_technique_id column plus some useful information about how to detect each technique (screenshots omitted). What I want to do is turn the multivalue mitre_technique_id values into separate columns, each named exactly after the MITRE ID, with the detection tips for that technique as the column's value. I was trying something with foreach, transpose, and mvexpand, but with no good results. Does anyone know how to achieve this? Thanks in advance!
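A sketch of one way, assuming the detection tips live in a parallel multivalue field (here called mitre_detection, hypothetical) and each row has a key field such as signature:

    ... existing search ...
    | eval pair=mvzip(mitre_technique_id, mitre_detection, "|")
    | mvexpand pair
    | eval id=mvindex(split(pair,"|"),0), tip=mvindex(split(pair,"|"),1)
    | xyseries signature id tip

mvzip/mvexpand puts each technique and its tip on its own row, and xyseries then pivots the technique IDs into column names.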
I am using imported CSV data in my searches, and the CSV file defines a column TIME that only includes the year and month, in the format YYYY-MM. I am attempting to convert that field into a UTC UNIX timestamp using the strptime() function, but have not had any success (screenshot of the extracted fields omitted). These were the searches I tried with strptime(); none of them worked:

index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA | eval TIME=strptime(TIME, "%Y-%m")
index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA | eval TIME=TIME."-00:00:00:00", TIME=strptime(TIME, "%Y-%m-%d:%H:%M:%S")
index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA | eval my_time=strptime('TIME', "%Y-%m")
index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA | eval my_time=strptime(YEAR.MONTH, "%Y-%m")

I also tried the convert command, and neither of these worked either:

index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA | convert timeformat="%Y-%m" mktime(TIME) AS NEW_TIME
index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA | eval TIME=TIME."-00:00:00:00" | convert timeformat="%Y-%m-%d:%H:%M:%S" mktime(TIME) AS NEW_TIME

Any advice is appreciated, thank you.
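A sketch of the common workaround: anchor the month to a complete date by appending a day before parsing (my_time is a hypothetical field name):

    index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA
    | eval my_time=strptime(TIME."-01", "%Y-%m-%d")
    | eval check=strftime(my_time, "%Y-%m-%d")

Note that the second attempt above fails for a separate reason: "-00:00:00:00" appends four colon-separated groups, while the format "%Y-%m-%d:%H:%M:%S" expects the first of them to be a day of the month.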
Hello, I am facing a disk space issue in my Splunk instance, so I decided to delete unwanted data, as it is a test environment. While running the command index=malware | delete I get the following error:

Search not executed: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch. user=admin., concurrency_category="historical", concurrency_context="user_instance-wide", current_concurrency=0, concurrency_limit=5000

I can also see many other errors in Splunk (screenshot omitted). Please help me solve these issues.
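For context, | delete is itself a search, so it cannot run once free space drops below minFreeSpace, and it only hides events from search without returning disk space anyway. A sketch of one way to actually reclaim the space from the CLI (this permanently removes the index's data and needs a restart window):

    # on the indexer
    splunk stop
    splunk clean eventdata -index malware
    splunk start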
Hi Team, I'm looking for a query to compare Splunk ingestion volume between the current date and a week ago, i.e. compare today's ingestion volume with the exact same day a week ago and get the % difference. Please let me know if any queries are available, preferably via REST services. Thanks!
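A sketch using the license usage logs (run where the license manager's _internal data is searchable; b is the bytes-ingested field in license_usage.log):

    index=_internal source=*license_usage.log type=Usage ((earliest=@d latest=now) OR (earliest=-7d@d latest=-6d@d))
    | eval period=if(_time >= relative_time(now(), "@d"), "today", "week_ago")
    | stats sum(eval(if(period="today", b, 0))) AS today_bytes sum(eval(if(period="week_ago", b, 0))) AS week_ago_bytes
    | eval pct_diff=round((today_bytes - week_ago_bytes) / week_ago_bytes * 100, 2)

The week-ago window covers a full day while today's window only runs up to now, so either compare at end of day or accept the partial-day skew.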
Hi Team, after upgrading the SSL certificate we are not able to connect to the deployment server from the UF. We are getting the below error logs:

DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
INFO DC:PhonehomeThread - Attempted handshake 1050 times. Will try to re-subscribe to handshake reply
INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group primary_indexers has been blocked for 12600 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
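One quick check after a certificate change is to look at what the deployment server is actually presenting on its management port; a sketch (the hostname is hypothetical, 8089 is the default port):

    openssl s_client -connect deployment-server.example.com:8089 -showcerts </dev/null

Compare the presented chain and validity dates against the CA the forwarder trusts (server.conf [sslConfig] sslRootCAPath), and check splunkd.log on both ends for SSL handshake errors.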
I'm in a RHEL8, Splunk 8.2.6 distributed environment with a single KV store on each server. Can the Memory Mapped (MMAP) storage engine and the WiredTiger storage engine exist in the same Splunk environment, but on different single-instance servers? I would like to do the migration one server at a time over multiple days.
Hello, we have a rather noisy agent that is logging about 19GB of data daily. How can I filter out events like the following via inputs.conf?

Process Information:
  Process ID: 0x1450
  Process Name: C:\Program Files\Rapid7\Insight Agent\components\insight_agent\3.1.5.14\ir_agent.exe

Thanks,

Garry
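This looks like Windows event log data, so a sketch of two options (the channel, sourcetype, and event code are assumptions about your setup): a blacklist on the input itself, or a nullQueue on the first heavy forwarder/indexer the data reaches.

    # inputs.conf (WinEventLog inputs support regex blacklists)
    [WinEventLog://Security]
    blacklist1 = EventCode="4688" Message="ir_agent\.exe"

    # props.conf on the heavy forwarder/indexer
    [WinEventLog:Security]
    TRANSFORMS-drop_ir_agent = drop_ir_agent

    # transforms.conf
    [drop_ir_agent]
    REGEX = ir_agent\.exe
    DEST_KEY = queue
    FORMAT = nullQueue

Note that plain file-monitor inputs cannot filter in inputs.conf; for those, only the props/transforms route applies.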
Hello Team @SPL, I was working on some development activity and got stuck at one point. We have a scenario where I need to check which user did transactions with more than 3 different vendors on a single day, and transfer the output into tabular format. When I perform a distinct count, I get the count of users who transacted with 3 vendors on the same day:

| stats dc(Vendor) AS dc_vendor values(Vendor) AS Vendor BY UserID

I need the output detailed as in Table 2.

Table 1:
Date       | UserID  | Vendor        | Transactions
10/5/2021  | user 1  | SAAS(User 1)  | $$$$$
10/5/2021  | user 2  | PAAS(User 1)  | $$$$$
10/7/2021  | user 3  | IAAS          | $$$$$
10/8/2021  | user 4  | AAA           | $$$$$
10/9/2021  | user 5  | CCCC          | $$$$$
10/10/2021 | user 6  | FFFF          | $$$$$
10/5/2021  | user 7  | XXXX (User 1) | $$$$$
10/6/2021  | user 8  | ZZZZ          | $$$$$
10/8/2021  | user 9  | EEE           | $$$$$
10/9/2021  | user 10 | QQQQ          | $$$$$

Output Table 2:
Date      | UserID | Vendor        | Transactions
10/5/2021 | user 1 | SAAS(User 1)  | $$$$$
          |        | PAAS(User 1)  | $$$$$
          |        | XXXX (User 1) | $$$$$
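A sketch of the usual pattern: group by day and user, then keep only users whose distinct vendor count crosses the threshold (field names follow the tables above; Transactions is assumed to be a field on the events):

    index=...
    | bin span=1d _time AS Date
    | stats dc(Vendor) AS dc_vendor values(Vendor) AS Vendor list(Transactions) AS Transactions BY Date UserID
    | where dc_vendor >= 3
    | fieldformat Date=strftime(Date, "%m/%d/%Y")
    | table Date UserID Vendor Transactions

Use > 3 instead of >= 3 if "more than 3" is meant literally; the example output suggests 3 vendors already qualifies.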
Not strictly a Splunk question, more a VMware vCenter one, but I'm hoping somebody has solved this before me! We're working to get the logs from vCenter into Splunk using syslog, Kiwi, and the Splunk Add-on for vCenter Logs. We've figured out all the components: configured vCenter correctly via rsyslog.conf, set Kiwi up to use native messages and not add a date and time stamp, and we were just about to start the app to fetch the Kiwi logs when we found we could not control the severity level in rsyslog. We referred to the help at https://www.rsyslog.com/doc/v8-stable/configuration/modules/imfile.html, but this refers to the directive $InputFileSeverity as legacy. Regardless of what we set the parameter $InputFileSeverity to, it ignores us and sends everything right up to Debug (level 7). As that more than doubles the log size for no material benefit, I'd like to tell vCenter not to bother. What is the correct syntax of the stanza in rsyslog.conf to set the severity level to level 6 / Info or lower? We tried:

$InputFileSeverity 6
$InputFileSeverity Info
$InputFileSeverity Info,Warning
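For reference, in current rsyslog syntax the legacy $InputFile* directives map to an input(type="imfile" ...) block, and per the imfile docs the Severity parameter only assigns a severity label to the lines read from the file; it does not filter them. A sketch of the modern form plus a content filter (the file path and match string are hypothetical):

    module(load="imfile")
    input(type="imfile"
          File="/var/log/vmware/vpxd/vpxd.log"
          Tag="vpxd:"
          Severity="info"
          Facility="local0")

    # Severity= labels messages; to actually drop debug lines, filter on content:
    if $msg contains "debug" then stop

Since every line read by imfile gets the same assigned severity, a severity-based filter cannot separate debug lines within one file; filtering on message text (or lowering vCenter's own log level) is the practical route.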
Hi All, I have an SPL query that runs against an index/sourcetype with millions of jobnames. I want my SPL to read through a list of jobnames produced by a different query and use it as a subsearch, OR I have created a lookup CSV of these 16,000 jobnames and want to run my search against that. How do I do that?

Main SPL that runs on millions of jobnames:

earliest=-7d index=log-13120-nonprod-c laas_appId=qbmp.prediction* "jobPredictionAnalysis" prediction lastEndDelta
| table jobname, prediction_status, predicted_end_time

Below is an input lookup, freq_used_jobs_bmp_3months.csv, which is a simple two-column file: jobName, freq_count.

I tried to join the main query with this input file; I want to run my SPL queries on this list of jobNames only:

earliest=-7d index=log-13120-nonprod-c laas_appId=qbmp.prediction* "jobPredictionAnalysis" prediction lastEndDelta
| table jobname, prediction_status, predicted_end_time
| lookup freq_used_jobs_bmp_3months.csv jobName output freq_count
| table jobname, freq_count

The above query fails with this error:

na_prod_secure-ist-indexer-1_iapp724.randolph.ms.com-23000] Streamed search execute failed because: Error in 'lookup' command: Could not construct lookup 'freq_used_jobs_bmp_3months.csv, jobName, output, freq_count'. See search.log for more details

I removed any null rows in the file and still get the same error. The other option is to somehow join the main query with a subsearch instead of a lookup file.

Main query:

earliest=-7d index=log-13120-nonprod-c laas_appId=qbmp.prediction* "jobPredictionAnalysis" prediction lastEndDelta
| table jobname, prediction_status, predicted_end_time

Subsearch that lists the smaller set of jobNames used in the last 3 months:

earliest=-90d index="log-13120-prod-c" sourcetype="autosys_service_secondary:app" OR sourcetype="autosys_service_primary:app" "request:JobSearch" installation="P*" NOT "*%*"
| stats count as freq_count by jobName

Now how do I join the two?
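A sketch of both routes. Two likely culprits in the failing lookup: the events use jobname while the CSV header is jobName (lookup field names are case-sensitive, so map them with AS), and the CSV must be uploaded as a lookup file that is shared with the app you search from.

Lookup route:

    earliest=-7d index=log-13120-nonprod-c laas_appId=qbmp.prediction* "jobPredictionAnalysis" prediction lastEndDelta
    | lookup freq_used_jobs_bmp_3months.csv jobName AS jobname OUTPUT freq_count
    | where isnotnull(freq_count)
    | table jobname, prediction_status, predicted_end_time, freq_count

Subsearch route (the subsearch expands into jobname="..." terms that filter the outer search):

    earliest=-7d index=log-13120-nonprod-c laas_appId=qbmp.prediction* "jobPredictionAnalysis" prediction lastEndDelta
        [ search earliest=-90d index="log-13120-prod-c" (sourcetype="autosys_service_secondary:app" OR sourcetype="autosys_service_primary:app") "request:JobSearch" installation="P*" NOT "*%*"
          | stats count by jobName
          | rename jobName AS jobname
          | fields jobname ]
    | table jobname, prediction_status, predicted_end_time

Subsearches are capped at roughly 10,000 results by default, so with ~16,000 jobnames the lookup route is the safer one.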