All Topics



The code that keeps track of which messages Splunk has already processed does not work with Splunk 7, but there is a simple workaround: around line 678 of get_imap_email.py (search for \Deleted), add the following line outside of the if condition:

M.store(num, '+Flags', '(\Flagged)')

This flags each message (the "Important" flag in Outlook/Exchange). You can then search for UNFLAGGED in your imap.conf (or UNDELETED UNFLAGGED if you want to be a bit more careful). As each message is processed, it gets flagged on the IMAP server and is not processed again. This also lets you run two copies on different heavy forwarders for redundancy (there is some chance that both copies process the same messages at the same time and duplicate them, but it's unlikely).
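For reference, the imap.conf side of this workaround might look like the sketch below. The stanza and setting names here are assumptions; check the spec file that ships with your version of the app for the exact keys:

```
# imap.conf -- sketch only; verify stanza and setting names
# against your app's spec file before using
[IMAP Configuration]
# Skip messages the workaround has already flagged on the server.
# Add UNDELETED as well to be a bit more careful.
imapSearch = UNDELETED UNFLAGGED
```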
Hi, I have dynamic users that I am populating in a multiselect input. Now I want to give the user the option to choose AND or OR, since sometimes the user wants to see values for userA AND userB, and sometimes for userA OR userB. Is that possible? As of now I only know how to use one of the two options. How can I support both?
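One way this is commonly handled in Simple XML is to add a radio input for the operator and rebuild the filter token whenever the multiselect changes. A sketch — the field name `user` and all token names are assumptions, and the `<change>`/`<eval>` behavior should be verified on your Splunk version:

```
<input type="radio" token="op">
  <label>Combine selected users with</label>
  <choice value="AND">AND</choice>
  <choice value="OR">OR</choice>
  <default>OR</default>
</input>
<input type="multiselect" token="users">
  <label>Users</label>
  <valuePrefix>user="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>,</delimiter>
  <change>
    <!-- Rebuild the filter: swap the comma delimiter for the chosen operator -->
    <eval token="user_filter">"(" . replace($users|s$, ",", " " . $op|s$ . " ") . ")"</eval>
  </change>
  <!-- dynamic <search> population of choices goes here -->
</input>
```

The panel searches would then reference $user_filter$ instead of $users$.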
OK, so I am grabbing watt readings every 10 or so seconds, but not exactly every 10 seconds: some samples are 5 seconds apart, some are 13, most are 10. Long story short, I have about 310-380 watt samples per hour, and each sample can be any number from 0 to 10,000 watts. With me so far? Now I need to take whatever quantity of samples I have within the hour and figure out my kWh cost, which is 0.095 (9.5 cents) per kWh. I don't know how to do that, and I don't want to average things too much; I want the kWh price to be as accurate as possible. I will be using this calculation in a Splunk dashboard, so if anyone could help I would greatly appreciate it.
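Since the sample spacing is irregular, one approach is to multiply each reading by the time elapsed since the previous sample (giving watt-seconds, i.e. joules), convert to kWh by dividing by 3,600,000, and sum per hour. A sketch, assuming the reading is in a field named `watts` (the index and sourcetype names are placeholders):

```
index=power sourcetype=watt_readings
| sort 0 _time
| streamstats current=f window=1 last(_time) as prev_time
| eval gap_sec = _time - prev_time
| eval kwh = watts * gap_sec / 3600000
| bin _time span=1h
| stats sum(kwh) as kwh by _time
| eval cost = round(kwh * 0.095, 4)
```

This treats each reading as holding for the gap since the previous sample, which respects the uneven intervals instead of averaging them away.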
When trying to deploy a new cluster bundle I get errors with this verbiage:

[Not Critical] No spec file for: E:\Program Files\Splunk\etc\master-apps\SA-Hydra\default\hydra_gateway.conf

Several different apps get this message! I am running on Windows 2012 R2.
Hi there, I have a query. I am trying to install the Splunk Add-on for Box. I looked it up on the Splunk website and it states that it is not compatible with Splunk version 7.3.3. My questions:

1. Does anyone know the reason?
2. Is there an alternative, or another add-on that can be leveraged for capturing Box logs?
3. If yes, how, and what needs to be changed?

Thank you!
Can you also publish how to generate the bearer token and get the room ID?
How should we interpret the "Deployment-Wide Averaged System Load Average" metric in the Monitoring Console's indexing metrics? Using this metric, at what point would you raise a flag that the indexers are being driven too hard and we should increase capacity?
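For context, the Monitoring Console derives this from the resource-usage introspection data, and the normalized 1-minute figure is divided by core count, so sustained values near or above 1 generally suggest CPU saturation. To break the deployment-wide average down per indexer, a search along these lines against _introspection may help (field names as I understand them; verify on your version):

```
index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval load_per_core = 'data.normalized_load_avg_1min'
| timechart avg(load_per_core) by host
```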
How do I find which version of Phantom I'm running from the console/ssh? (Captured question from Phantom Community Slack)
I have 3 web servers that take the traffic, load-balanced with least-connections and no sticky sessions, so traffic is evenly distributed between the servers. I am looking to create an alert if any of the hosts has a comparatively lower event count. I have the basic query below, which looks for a specific event in all 3 access logs. We can alert when there are no events at all by adding | search eventCount=0, but I need to alert by comparing against the other hosts: for example, server x has 25 events while the other servers have 100, which exceeds my threshold (a 75% difference). This will help me troubleshoot the LB, or perhaps a process on server x that is taking longer to respond.

index=x AND (host="x" OR host="y" OR host="z") AND source="*access" AND "xyz.com"
| search ResponseCode=200
| inputlookup append=t apache_httpd.csv
| stats count as eventCount by host

apache_httpd.csv is nothing but the following:

host
x
y
z
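A sketch of one way to do the comparison: compute each host's count, compare it to the busiest host, and alert when a host falls below the cutoff (25% of the busiest corresponds to the 75% difference mentioned above):

```
index=x (host="x" OR host="y" OR host="z") source="*access" "xyz.com" ResponseCode=200
| stats count as eventCount by host
| inputlookup append=t apache_httpd.csv
| fillnull value=0 eventCount
| stats max(eventCount) as eventCount by host
| eventstats max(eventCount) as maxCount
| eval pctOfBusiest = round(eventCount / maxCount * 100, 1)
| where pctOfBusiest < 25
```

Set the alert to trigger when the number of results is greater than zero; a host with no events at all comes through from the lookup with pctOfBusiest = 0, so the eventCount=0 case is covered too.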
Any idea why the JWT Input wouldn't show up after install?
I am attempting to parse logs that contain fields similar to the example below. The field name is ValidFilterColumns, and it contains a JSON array of objects with key/value pairs for Id and Name.

ValidFilterColumns="[{"Id":"124","Name":"OrderId"},{"Id":"25","Name":"AssetClass"},{"Id":"123","Name":"Custodian"},{"Id":"13","Name":"Country"},{"Id":"1","Name":"Symbol"}]"

My question is: how could I compose a regex that parses out the Id and Name of each object in the array? As a note: I have tried making these extracted fields, but they only function properly when another log's field contains the same number of objects.
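A sketch using rex with max_match=0, which captures every object in the array regardless of how many there are (this sidesteps the fixed-count problem with ordinary extractions):

```
... | rex field=ValidFilterColumns max_match=0 "\"Id\":\"(?<Id>[^\"]+)\",\"Name\":\"(?<Name>[^\"]+)\""
| eval id_name_pairs = mvzip(Id, Name, ": ")
```

Id and Name come back as multivalue fields in array order; the mvzip is only there to show them paired up. If the field value were valid JSON without the outer quotes, spath would be the cleaner option.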
I have the following code that shows leases ending in June:

| inputlookup Leases.csv
| rename "Lease End" as leaseEnd
| eval timestamp=strptime(leaseEnd, "%Y-%m-%d")
| eval Day=strftime(timestamp,"%d"), Month=strftime(timestamp,"%m"), Year=strftime(timestamp,"%Y")
| where Year = 2020 AND Month = 6

The requirement is to update the query to return leases ending by quarter (Q1, Q2, Q3, Q4 of 2020) and display them in a bar chart by quarter. How can I do that? I don't know how to aggregate the data by quarter.
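One way to bucket by quarter is to derive it from the month number: ceiling(month / 3) maps months 1-3 to Q1, 4-6 to Q2, and so on. A sketch building on the query above:

```
| inputlookup Leases.csv
| rename "Lease End" as leaseEnd
| eval timestamp = strptime(leaseEnd, "%Y-%m-%d")
| eval Year = strftime(timestamp, "%Y")
| eval Quarter = "Q" . ceiling(tonumber(strftime(timestamp, "%m")) / 3)
| where Year = "2020"
| stats count as Leases by Quarter
| sort Quarter
```

Rendered as a bar chart, Quarter becomes the x-axis and Leases the y-axis.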
I'm using the AWS CLI to get some Kinesis metrics. As part of that, I'm able to specify the output format as one of the formats listed here: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration-format

I've tried TEXT, as that seemed the most reasonable for Splunk, but I think the line-separated data is messing up Splunk's ingest:

METRICDATARESULTS iteratorAgeMilliseconds itagemillis PartialData
METRICDATARESULTS readProvisionedThroughputExceeded itagemillis PartialData
TIMESTAMPS 2020-04-15T20:21:00+00:00
TIMESTAMPS 2020-04-15T20:20:00+00:00
TIMESTAMPS 2020-04-15T20:19:00+00:00
TIMESTAMPS 2020-04-15T20:18:00+00:00
TIMESTAMPS 2020-04-15T20:17:00+00:00
TIMESTAMPS 2020-04-15T20:16:00+00:00
VALUES 0.0
VALUES 0.0
VALUES 0.0
VALUES 0.0
VALUES 0.0
VALUES 0.0
METRICDATARESULTS writeProvisionedThroughputExceeded itagemillis PartialData
TIMESTAMPS 2020-04-15T19:36:00+00:00
TIMESTAMPS 2020-04-15T19:35:00+00:00
TIMESTAMPS 2020-04-15T19:34:00+00:00
TIMESTAMPS 2020-04-15T19:33:00+00:00
VALUES 0.0
VALUES 0.0
VALUES 0.0
VALUES 0.0
VALUES 0.0
VALUES 0.0

Any thoughts, on either the AWS or Splunk side, on how best to handle ingesting this data?
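One option worth trying before wrestling with the TEXT layout: request JSON from the CLI instead (--output json) and let Splunk parse it natively with a small props.conf stanza. A sketch; the sourcetype name here is made up:

```
# props.conf -- sketch for ingesting the CLI's JSON output
[aws:kinesis:metricdata]
KV_MODE = json
TRUNCATE = 0
SHOULD_LINEMERGE = false
```

With JSON, the TIMESTAMPS/VALUES arrays stay attached to the metric they belong to, which is exactly the association the flattened TEXT format loses.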
I have a field that I know is an indexed field, because I can specify myfield::somevalue in my search and get results. After reading the documentation and other questions on this forum, I would expect to alternatively be able to specify myfield=somevalue and get the same results, albeit less efficiently; however, that is not what I am seeing. The results I get with = are a subset of the results I get with the indexed ::. This affects my ability to use this field as a filter with a subsearch, since my results suggest the subsearch uses the = form instead of ::. So I really have 2 questions:

1) Why am I seeing different results with = vs. ::?
2) Can you use a subsearch with the indexed :: form? For example:

index=myindex sourcetype=mysource [search index=anotherindex somefield=somevalue | table anotherfield]

Say the subsearch returns a value like 1452; my search then effectively becomes:

index=myindex sourcetype=mysource anotherfield=1452

Is there a way to make that

index=myindex sourcetype=mysource anotherfield::1452

with the subsearch?
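On question 2, one community workaround relies on the special handling of a subsearch field literally named search: its value is spliced into the outer search as raw query text, so you can construct the :: form yourself. A sketch (verify the behavior on your version):

```
index=myindex sourcetype=mysource
    [ search index=anotherindex somefield=somevalue
      | eval search = "anotherfield::" . anotherfield
      | table search ]
```

With multiple subsearch rows, the fragments should be combined with OR, as with normal subsearch results.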
I am working in an environment with several different constituencies, each with different needs in terms of apps and data. I believe it would be much easier for a user in a group to be presented with their own custom environment at login time, without having to share apps, such as Search, with everybody else. It appears as if the whole Splunk system is built for just one group. Can you point me to the relevant documentation? Thank you.
I have a dashboard with three radio buttons to select the "Style" of all graphs on my dashboard:

"Relative breakdown": an area chart that's stacked 100%, and shows the "other" series
"Absolute breakdown": an area chart that's stacked, and shows the "other" series
"Raw comparison": a line chart that's not stacked, and hides the "other" series

They're defined like so:

<label>Style</label>
<choice value="style_relative">Relative breakdown</choice>
<choice value="style_absolute">Absolute breakdown</choice>
<choice value="style_compare">Raw comparison</choice>
<change>
  <condition label="Relative breakdown">
    <set token="chartType">area</set>
    <set token="stackMode">stacked100</set>
    <set token="useother">t</set>
  </condition>
  <condition label="Absolute breakdown">
    <set token="chartType">area</set>
    <set token="stackMode">stacked</set>
    <set token="useother">t</set>
  </condition>
  <condition label="Raw comparison">
    <set token="chartType">line</set>
    <set token="stackMode">default</set>
    <set token="useother">f</set>
  </condition>
</change>

Is it possible to set useother=t on my timechart, and then show/hide it only in the visualization layer, so that toggling doesn't require a full re-run of the search?
Trying to find the discrepancy between what my LDAP user lookup is reporting and what my user count in AD is. Finding the search that builds that lookup is a bit tricky. Anyone know which macro... See more...
Trying to find the discrepancy between what my LDAP user lookup is reporting and what my user count in AD is. Finding the search that builds that lookup is a bit tricky. Anyone know which macro builds that lookup table? Thanks!
I am able to change the color of and highlight certain table rows in a dashboard panel using CSS and JavaScript. Can we do the same in a scheduled report? Please let me know. My table will have the columns below, indicating the status of each job:

JobName JobStartTime JobEndTime JobStatus

If JobStatus is "COMPLETE", the row needs to be green. I'd appreciate any pointers on where to begin.
Hi everyone, I could really use some input from you all. I am using Splunk Cloud in my environment, with an on-prem deployment server for universal forwarders. Two days ago, I stopped receiving data in six indexes. The data in those indexes originates from a syslog server. Steps I have taken so far:

- Verified logs are currently being created in syslog from the sources
- Verified the syslog server can still reach the deployment server via ping
- Verified splunkd is running on the syslog server
- Verified the deployment server has received a recent phone home from the syslog server
- Verified data from other universal forwarders is searchable on the search head
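To narrow down where the pipeline breaks, it can help to check from the search head exactly when each index last received anything:

```
| tstats latest(_time) as lastEvent where index=* by index
| eval minutesSince = round((now() - lastEvent) / 60, 0)
| convert ctime(lastEvent)
| sort - minutesSince
```

If all six indexes went quiet at the same minute, that points at a shared cause (a blocked output queue, an expired token, or a changed input stanza) rather than six separate sources; splunkd.log on the syslog server's forwarder, particularly TcpOutputProc messages, would be the next place to look.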
I have an issue tracker for tracking all opened issues, and the query for it is below:

search issue_status=open | timechart span=1d count(issue_id) by issue_category | addtotals

I need addtotals to be a running count of open issues: for each day it should show today's count plus the sum from all previous days, i.e. the total count of all issues opened so far. Could you please help me?
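A sketch of one way to get the running total: keep the per-day timechart, then accumulate the daily total with streamstats:

```
issue_status=open
| timechart span=1d count(issue_id) as opened by issue_category
| addtotals fieldname=daily_total
| streamstats sum(daily_total) as cumulative_open
```

Note this accumulates issues by the day they were opened; if issues can later close, you would subtract a similar running count of closures to get the number still open on each day.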