All Topics

Hi forum! I have a couple of tricky questions about working with the same input data and the same type of graphs. I am currently working on weekly Jira ticket inflow/outflow/backlog column graphs, based on ticket data ingested nightly into a Splunk index, from the Confluence REST API into simple CSV data.

Essential JIRA fields -> CSV fields used: key (ticket ID), created (datestamp), resolved (datestamp), priority.

Each day we ingest ALL JIRA tickets for a project into a Splunk index (let's say index "project", keyed with source "jira_bugs"). Theoretically I should be able to use the last 24 hours of ingested data to drive the graphs, based on the dates in the data.

Based on that, I create a weekly Jira ticket inflow graph over the last 6 months of data (from the last 24 hours of ingest):

index="project" source="jira_bugs"
| dedup key
| eval created_date = strptime(created,"%Y-%m-%d")
| eval resolved_date = strptime(resolved,"%Y-%m-%d")
| eval _time = created_date
| where _time > now()-15811200
| timechart span=1w partial=false count AS "TR inflow" BY priority
| addtotals
| convert timeformat="W%y%V" ctime(_time) AS date
| sort _time
| fields - _time
| table date *

So, based on the ticket's created date (used as _time), I bucket the data with timechart into weekly spans and then change the time label to a week label, Wyyww. The output drives stacked columns (per priority) and an overlay line graph for totals. Outflow is ditto, but _time is driven by resolved_date instead.

The problem with this approach is that if no tickets were created in a given week, the graph should (but does not) render an empty space for that week. I am thinking that I may need to chain this query with a preceding gentimes (7d increments starting 6 months ago?), and then somehow group the count of tickets into the generated time events?

Secondly, I need to create a weekly Jira ticket backlog graph, and this feels even trickier. For this, I need to count the number of tickets per week that fit within a certain time range, meaning I need to count a ticket in each week for which it was open (evaluating whether created_date < [week or certain day] < resolved_date). So the same ticket shall be counted (duplicated with different _time stamps?) across the several weeks (columns in the graph) for which it was open. Seems like a simple thing, but each time I attack this problem I give up after googling and testing a number of ideas from the Splunk reference and forum.
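Two hedged sketches, assuming the field names above and that open tickets have an empty or missing resolved value -- untested, so treat them as starting points rather than working answers. For the backlog, the idea is to expand every ticket into one pseudo-event per week it was open (mvrange generates one epoch per 604800-second step from the creation week up to resolution), and then let timechart count the expanded events:

index="project" source="jira_bugs"
| dedup key
| eval created_epoch = strptime(created,"%Y-%m-%d")
| eval resolved_epoch = if(isnull(resolved) OR resolved="", now(), strptime(resolved,"%Y-%m-%d"))
| eval open_weeks = mvrange(relative_time(created_epoch,"@w"), resolved_epoch, 604800)
| mvexpand open_weeks
| eval _time = open_weeks
| where _time > now()-15811200
| timechart span=1w partial=false count AS "Backlog" BY priority

For the empty-week problem in the inflow graph, makecontinuous right after timechart may be simpler than chaining gentimes -- it fills in the missing weekly buckets, which fillnull then zeroes:

... | timechart span=1w partial=false count AS "TR inflow" BY priority
| makecontinuous _time span=1w
| fillnull value=0
| addtotals
...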
Hello, I have a huge volume of data coming in under different source types (or indexes) for different applications/projects. Is there any way to assign indexed fields for each of the data sources/indexes/apps? For example, in most cases ACCOUNTID and IPAddress are the unique fields for each of the applications/projects. How would I assign these 2 fields as indexed fields? Any thoughts or recommendations would be highly appreciated. Thank you so much.
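A hedged sketch of the standard index-time extraction mechanism, assuming the events contain literal key=value pairs such as ACCOUNTID=12345 (the stanza names and regexes below are illustrative -- adjust them to your actual event format; note that index-time fields only apply to newly indexed data and require a restart of the parsing tier):

props.conf (on the indexer or heavy forwarder that parses the data):

[your_sourcetype]
TRANSFORMS-indexed_fields = idx_accountid, idx_ipaddress

transforms.conf:

[idx_accountid]
REGEX = ACCOUNTID=(\S+)
FORMAT = ACCOUNTID::$1
WRITE_META = true

[idx_ipaddress]
REGEX = IPAddress=(\S+)
FORMAT = IPAddress::$1
WRITE_META = true

fields.conf (on the search head, so searches treat the fields as indexed):

[ACCOUNTID]
INDEXED = true

[IPAddress]
INDEXED = true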
Does anyone know where I can find a copy of the original query for the "Spike in DNS Traffic (Last Day)" panel? This panel is under the InfoSec dashboard, Advanced Threats, Network Anomalies section. We've been altering the search for that panel and need to revert to the original search query. Unfortunately, no copy was made prior to making the changes.
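One place worth checking, assuming a typical app layout (the app directory name below is a guess -- confirm the exact name under $SPLUNK_HOME/etc/apps): edits made through the UI are saved in the app's local directory, while the shipped dashboard XML stays untouched in default, so the original search may still be on disk:

# shipped (original) dashboard XML, with the panel's search inline
grep -r -A 20 "Spike in DNS" $SPLUNK_HOME/etc/apps/InfoSec_App_for_Splunk/default/data/ui/views/

Reinstalling the app from Splunkbase is another way to recover the shipped XML if the default copy was modified directly.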
Hello, we are trying to find a way to import Sumo Logic data into Splunk; our existing Sumo Logic deployment is being replaced by Splunk. Is there any document we can refer to? Any help would be greatly appreciated. Thanks.
Currently .conf 2022 isn't available in the Splunk Events app. Is this going to be added? Is there any other way to view the session agenda easily?
Hello everyone, I'd like to update one of the Splunk apps -- Splunk Common Information Model -- from 4.20.2 to 5.0.1 to eliminate its jQuery 3.5 incompatibility issue. The platform I'm using is Splunk Cloud 8.2.2201.1 (Victoria Experience). Although I can update it to 5.0.1 via Splunk Cloud > Apps > Manage Apps, to my surprise it rolled back to the previous version, 4.20.2, after the Splunk Cloud restart. One thing I should mention: immediately after the update there was a Setup request. Although I didn't know how to set it up, I did change many default configurations -- I even provided it with a "fake" API key. However, it still rolled back to the previous version, and the provided key also disappeared after the rollback. Any suggestions on how to update it, please? Thank you very much!
Is it possible to ship only specific statements from a log file to be indexed in Splunk, rather than indexing the entire log file? For example, let's say I have a log file that contains some data about a failed process. If there was a one-line statement in this log file that told me the time that process failed, would it be possible to index that line of data and not the entire log file? How would that be done? Thank you in advance!
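A hedged sketch of the documented "keep only matching events" pattern, assuming the filtering happens on an indexer or heavy forwarder (a universal forwarder alone cannot do this) -- the stanza names and the regex are illustrative placeholders:

props.conf:

[my_sourcetype]
TRANSFORMS-keeponly = drop_everything, keep_failures

transforms.conf (order matters: the first transform sends everything to the nullQueue, the second routes matching lines back to the indexQueue):

[drop_everything]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_failures]
REGEX = process failed
DEST_KEY = queue
FORMAT = indexQueue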
Big warning at the beginning - it's not a question of "should I do that", and it's not a question of best practices. I'm not going to do something like that in production (and probably not even in a lab environment). It's a purely theoretical question. As we all know, there is usually a separate port for the deployment server, a separate one for HEC, and a separate one for REST calls between SH(s) and indexer(s). I was wondering how many of those functionalities could be squashed onto a single port (possibly with the help of an external reverse proxy). I suppose HEC and DS could be really good candidates to squish together. Any others? Just to make myself absolutely clear - I don't want it to be quick and well-performing. I'm just wondering if it would work at all.
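Purely as a thought experiment, a hedged nginx sketch of path-based multiplexing, assuming default ports (8088 for HEC; 8089 for the management/REST port, which deployment clients also use) -- untested, and TLS specifics between Splunk components may well break it in practice:

# hypothetical nginx reverse proxy: one external port, routed by URI path
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/splunk.pem;
    ssl_certificate_key /etc/nginx/certs/splunk.key;

    # HEC traffic (/services/collector/...)
    location /services/collector {
        proxy_pass https://127.0.0.1:8088;
    }

    # everything else: REST calls and deployment-server broker endpoints
    location / {
        proxy_pass https://127.0.0.1:8089;
    }
}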
While trying to ingest the logs from Log Analytics, I'm getting the error below:

ERROR pid=40806 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/modinput_wrapper/base_modinput.py", line 127, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py", line 96, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/input_module_log_analytics.py", line 72, in collect_events
    response = requests.post(uri,json=search_params,headers=headers)
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/api.py", line 110, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/api.py", line 56, in request
    return session.request(method=method, url=url, **kwargs)
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/sessions.py", line 488, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/sessions.py", line 609, in send
    r = adapter.send(request, **kwargs)
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/adapters.py", line 473, in send
    raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', error(104, 'Connection reset by peer'))

TA version is 1.0.3.
Hi there, if I have several Splunk Cloud stacks and an on-premise heavy forwarder, how can I configure the heavy forwarder to send specific logs to a specific Splunk Cloud?
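A hedged sketch of the standard selective-routing approach, assuming each Splunk Cloud stack has its own receiving endpoint (the server names and group names below are placeholders; in practice each stack's forwarder credentials app supplies the real values):

outputs.conf on the heavy forwarder:

[tcpout]
defaultGroup = cloud_stack_a

[tcpout:cloud_stack_a]
server = inputs.stack-a.splunkcloud.com:9997

[tcpout:cloud_stack_b]
server = inputs.stack-b.splunkcloud.com:9997

inputs.conf, to route an individual input to a specific stack:

[monitor:///var/log/app_b]
_TCP_ROUTING = cloud_stack_b

For routing by event content rather than by input, a props/transforms pair with DEST_KEY = _TCP_ROUTING is the usual alternative.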
We are trying to output computers that appear in index A but do not appear in index B. We want to ensure computers are being deployed with the correct software on them.

Index A
----------------------------------------------------
| name | objectclass | field 1 | field 2 | field 3 |
----------------------------------------------------
| PC 1 | computer    | x       | x       | x       |
| PC 2 | computer    | x       | x       | x       |
| PC 3 | computer    | x       | x       | x       |
| PC 4 | computer    | x       | x       | x       |
| PC 5 | computer    | x       | x       | x       |
| PC 6 | computer    | x       | x       | x       |

Index B
-----------------------------------------------------
| cname | objectclass | field 1 | field 2 | field 3 |
-----------------------------------------------------
| PC 1  | computer    | x       | x       | x       |
| PC 2  | computer    | x       | x       | x       |
| PC 5  | computer    | x       | x       | x       |
| PC 6  | computer    | x       | x       | x       |

I need the output to only show PC 3 and PC 4 so this can be supplied to the proper team to install the application. I have tried join with type left and outer, and I have also attempted stats. However, I believe my logic is wrong. Your help is greatly appreciated.
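A hedged sketch of the usual stats-based set difference, assuming the indexes are literally named indexA and indexB and that name/cname hold the same hostname format:

(index=indexA) OR (index=indexB)
| eval computer = coalesce(name, cname)
| stats dc(index) AS index_count values(index) AS seen_in BY computer
| where index_count=1 AND seen_in="indexA"
| table computer

The trick is that a computer present in both indexes gets index_count=2 and is filtered out, which avoids join entirely (join subsearches are subject to result limits).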
Hello, we recently installed the Splunk Add-on for Java Management Extensions. We have it working in our test environment but not in production. We get this error message: Received event for unconfigured/disabled/deleted index=jmx with source="source::com.sun.messaging.jms.server:type=Connection,subtype=Config,id=2333287862699082496" host="host::[removed]:0" sourcetype="sourcetype::jmx". So far received events from 1 missing index(es). We have a distributed deployment, and the index has been created on the indexers. When I log in to the indexers and go to Settings > Data > Indexes, I can see the 'jmx' index. However, if I log in to the management node and go to Settings > Distributed Environment > Indexer Clustering, the 'jmx' index isn't there. As far as I can tell, I've configured Test and Prod identically, so I'm not sure what the issue is. Does anyone have any ideas of what I can check?
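If the production indexers are clustered, a hedged sketch of how the index definition normally gets there -- via the cluster manager's configuration bundle rather than per-indexer UI edits (the path assumes a recent version; older versions use master-apps instead of manager-apps):

# $SPLUNK_HOME/etc/manager-apps/_cluster/local/indexes.conf on the cluster manager
[jmx]
homePath   = $SPLUNK_DB/jmx/db
coldPath   = $SPLUNK_DB/jmx/colddb
thawedPath = $SPLUNK_DB/jmx/thaweddb
repFactor  = auto

# then push the bundle to the peers
splunk apply cluster-bundle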
Hello Splunkers! I have an issue grouping multivalued fields after extracting fields from nested XML. The sample is as follows:

<WorkstationMetrics xmlns="xxxxxxxx">
  <WorkstationMetricData TypeCode="None" WorkstationID="0">
    <SequenceNumberValue Timestamp="2022-05-1" TypeCode="First">15704</SequenceNumberValue>
    <SequenceNumberValue Timestamp="2022-05-1" TypeCode="Last">15710</SequenceNumberValue>
  </WorkstationMetricData>
  <WorkstationMetricData TypeCode="Manual" WorkstationID="03">
    <SequenceNumberValue Timestamp="2022-05-1" TypeCode="First">9395</SequenceNumberValue>
    <SequenceNumberValue Timestamp="2022-05-1" TypeCode="Last">9463</SequenceNumberValue>
  </WorkstationMetricData>
  <WorkstationMetricData TypeCode="Manual" WorkstationID="05">
    <SequenceNumberValue Timestamp="2022-05-1" TypeCode="First">62</SequenceNumberValue>
    <SequenceNumberValue Timestamp="2022-05-1" TypeCode="Last">297</SequenceNumberValue>
  </WorkstationMetricData>
</WorkstationMetrics>

I tried the following search query to extract fields, but the fields extracted are multivalued with varying cardinality, and hence some of my mvzip commands are not working as expected:

index=... sourcetype=...
| spath output=workstationNumber path=WorkstationMetrics.WorkstationMetricData{@WorkstationID}
| spath output=sequenceType path=WorkstationMetrics.WorkstationMetricData.SequenceNumberValue{@TypeCode}
| spath output=sequenceNumber path=WorkstationMetrics.WorkstationMetricData.SequenceNumberValue
| eval consolidate=mvzip(sequenceType,sequenceNumber)
| mvexpand consolidate
| eval temp=split(consolidate,","), type=mvindex(temp,0), seqno=mvindex(temp,1)
| table workstationNumber type seqno

I expect to present this data in the following format:

Sl.no | WorkstationID | TypeCode | SequenceNumberValue
1     | 0             | First    | 15704
2     | 0             | Last     | 15710

Any help in mapping 2 multivalued fields with varying cardinality would resolve this issue. Or do we need to think outside the box?
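Since the varying cardinality comes from flattening all WorkstationMetricData elements into shared multivalue fields, one hedged workaround (untested; the wsdata field name is made up) is to split the raw event into one chunk per element first, so every mvzip only ever pairs values from a single workstation:

index=... sourcetype=...
| rex max_match=0 "(?s)(?<wsdata><WorkstationMetricData.*?</WorkstationMetricData>)"
| mvexpand wsdata
| spath input=wsdata output=WorkstationID path=WorkstationMetricData{@WorkstationID}
| spath input=wsdata output=TypeCode path=WorkstationMetricData.SequenceNumberValue{@TypeCode}
| spath input=wsdata output=SequenceNumberValue path=WorkstationMetricData.SequenceNumberValue
| eval pair = mvzip(TypeCode, SequenceNumberValue)
| mvexpand pair
| eval TypeCode = mvindex(split(pair,","),0), SequenceNumberValue = mvindex(split(pair,","),1)
| table WorkstationID TypeCode SequenceNumberValue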
Hi, I am creating a dashboard using data from a CSV with multiple tabs. How can I use inputlookup where the source CSV has multiple tabs? Thanks,
Hi, I need to capture the first two lines following the word 'ERROR' in the log file.

Example 1:
2022-05-20 11:36:22,393 ERROR [com.https.custom.integration.Integration](WFA:346899 - 5678976 triExecute:671116263 IE=671116263) com.https.platform.smartobject.dataaccess.SmartObjectNotFoundException: No Smart Object with id '-1' exists
com.https.platform.smartobject.dataaccess.SmartObjectNotFoundException: No Smart Object with id '-1' exists
at com.https.platform.smartobject.dataaccess.SmartObjectDaoImpl.read(SmartObjectDaoImpl.java:131)

Example 2:
2022-05-20 11:36:36,804 ERROR [com.https.ws.https](Thread-789) com.https.ws.errors.AccessException: User does not have access to create records in this GUI. [User:- , GUI: triIntegrationInstance]
com.https.ws.errors.AccessException: User does not have access to create records in this GUI. [User: ahmuy, wasim- kb4567, GUI: triIntegrationInstance]
at com.https.ws.utils.Validator.validateCreateAccess(Validator.java:588)
at com.https.ws.WSProcessThread.processRecord(WSProcessThread.java:258)
at com.https.ws.WSProcessThread.run(WSProcessThread.java:147)

Please help me with the regex.
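A hedged sketch, assuming each event contains the ERROR line followed by the lines of interest on their own lines (the field names line1 and line2 are made up for illustration):

| rex "(?ms)ERROR[^\n]*\n(?<line1>[^\n]+)\n(?<line2>[^\n]+)"
| table _time line1 line2

If you want both lines in a single field instead, (?<after_error>[^\n]+\n[^\n]+) after the first \n captures them together.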
Hello colleagues, I have a Python file which I add under Data inputs -> Script. I set the interval and set it up, and the file itself works, but it runs with Splunk's bundled interpreter, /opt/splunk/bin/python. I want it to run like this instead:

/bin/python /.../.../file.py

Help please!!
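One common workaround, sketched under the assumption that a shell wrapper is acceptable (the file names below are placeholders): Splunk launches .py scripted inputs with its own bundled Python, so point the input at a tiny shell script that execs the system interpreter instead.

#!/bin/sh
# run_file.sh -- hypothetical wrapper placed in an app's bin/ directory
exec /bin/python /path/to/file.py "$@"

inputs.conf:

[script://./bin/run_file.sh]
interval = 300
disabled = 0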
I'm wondering what would happen if I ran "splunk btool whatever" from a script that's running as a scripted input. Theoretically it should run as whatever user splunkd.exe is running as, so the question is whether it will have all the necessary rights and run unattended properly, or whether it will stop and ask for credentials. The case in question is mostly about Windows and spawning a subprocess from a PowerShell script, but a general answer including Unix is also welcome.
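A minimal PowerShell sketch of what such a scripted input might do, assuming SPLUNK_HOME is available in the input's environment. Since btool only reads .conf files from disk, the working assumption is that it needs filesystem rights rather than Splunk credentials -- but treat that as an assumption to verify:

# hypothetical scripted-input body (PowerShell)
$splunk = Join-Path $env:SPLUNK_HOME "bin\splunk.exe"
# capture stdout and stderr; btool should not prompt interactively
$output = & $splunk btool inputs list --debug 2>&1
$output | ForEach-Object { Write-Output $_ }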
Hi! We are using a dashboard that displays all the indexes and information about those indexes. I would like to have an additional column. The current search uses this:

| inputlookup name_of_the_lookup
| search index=* (index=***)
| sort index
| table index, field_A, field_B, field_C, field_D, field_E, field_F, field_G, field_H, field_I, field_J, field_K

What I want is an additional column named 'Access' that says true or false depending on whether the user currently viewing the dashboard has access to that index. Is there a search that would do this?
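A hedged sketch using the trick that eventcount only returns indexes the running user is allowed to search (assuming your lookup's index column matches actual index names):

| inputlookup name_of_the_lookup
| search index=*
| join type=left index
    [| eventcount summarize=false index=* | dedup index | fields index | eval Access="true"]
| fillnull value="false" Access
| table index, Access, field_A, field_B, field_C

The subsearch runs with the permissions of whoever loads the dashboard, which is what makes the true/false column user-specific.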
I have a slightly atypical environment: an indexer cluster with two search head clusters talking to it. On one of the indexers, while looking for something completely different, I found this in the logs:

ERROR ClusterSlaveControlHandler - Peer <redacted> will not return any results for this search, because the search head is using an outdated generation (search head gen_id=25238; peer gen_id=25303). This can be caused by the peer re-registering and the search head not yet updating to the latest generation. This should resolve itself shortly.

Unfortunately, it didn't "resolve itself shortly". Furthermore, it persists across 75% of my indexers. I searched for this across the Answers boards, but there's not much info about it. I'm not sure where to even start debugging this; that's the main problem, I think. To make matters more interesting, Splunk doesn't seem to return bad results or throw errors at me while searching (I do have some "hangs" on search, but that's probably due to a yet-unresolved storage latency problem). Any ideas where to look from here?
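One hedged starting point for debugging: compare the generation the cluster manager is publishing with what each search head believes, via the cluster REST endpoints (hostnames below are placeholders, and the exact endpoint names may vary by version -- check your REST API reference):

# on the cluster manager: the current generation the peers are registered under
curl -k -u admin https://cm.example.com:8089/services/cluster/master/generation?output_mode=json

# on each search head: the generation it is searching with
curl -k -u admin https://sh.example.com:8089/services/cluster/searchhead/generation?output_mode=json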
Hi, I have the following case which I can't get around. My search returns something like this (screenshot omitted). In order to help security analysts with some useful information, we add the MITRE technique ID from a lookup, plus some useful information about how to detect that particular technique. What I want to do is turn the multivalue mitre_technique_id column into separate columns, each named exactly like the MITRE ID, with each column containing the detection tips for that technique. I tried something with foreach, transpose, and mvexpand, but with no good results. Does anyone know how to achieve it? The end result would look like this (screenshot omitted). Thanks in advance!
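A hedged sketch of the dynamic-field-name trick, assuming each row has parallel multivalue fields mitre_technique_id and detection_tips, plus some per-row key to group by (rule_name below is made up -- substitute your own):

... your base search ...
| eval pair = mvzip(mitre_technique_id, detection_tips, "|")
| mvexpand pair
| eval technique = mvindex(split(pair,"|"),0), tips = mvindex(split(pair,"|"),1)
| eval {technique} = tips
| fields - pair technique tips mitre_technique_id detection_tips
| stats values(*) AS * BY rule_name

The eval {technique} = tips line is what creates one column per MITRE ID, named after the ID's value.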