All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi guys, I'm having a problem: the drilldown menu for the panels on my dashboard is disappearing. When I render the dashboard it's there, but when I try to click it, it vanishes. There are no scripts or custom visualizations on the dashboard, so I don't understand why this happens. The dashboard's permissions are also global.

I'm trying to display a cumulative sum in a timechart, using two sourcetypes:

  index=_internal | [search sourcetype=source1 clu=* value=* | rename value as source1value] | appendcols [search sourcetype=source2 clu=* value=* | rename value as source2value] | table source1value source2value | eval res=source2value-source1value | stats sum(res)

Up to here this gives the total sum of res, but I need to display it as a cumulative sum in a timechart. Can anyone suggest how I can achieve this?

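A minimal sketch of one way to get there, assuming both sourcetypes live in the same index and value can be summed per time bucket (the span and everything except the source1value/source2value names from the question are assumptions):

  index=_internal (sourcetype=source1 OR sourcetype=source2) clu=* value=*
  | eval source1value=if(sourcetype=="source1", value, null())
  | eval source2value=if(sourcetype=="source2", value, null())
  | timechart span=1h sum(source1value) as s1 sum(source2value) as s2
  | fillnull value=0 s1 s2
  | eval res=s2 - s1
  | streamstats sum(res) as cumulative_res
  | table _time cumulative_res

streamstats keeps a running total across the timechart rows, which is what makes the sum cumulative rather than per-bucket.
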
Hello, I have the below search:

  index=test sourcetype=Test | stats count by _time | eventstats perc99(count) as p99 | eval Percentile = case(count >= p99, "99%") | stats count by transactions by percentile

I want to add a column that shows the % of transactions in the 99th percentile, however I can't work out how to do this. Any advice would be greatly appreciated. Thanks, Joe

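A sketch of one way to get the percentage, assuming the goal is the share of time buckets at or above the 99th-percentile count (the "below 99%" label is an assumption):

  index=test sourcetype=Test
  | stats count by _time
  | eventstats perc99(count) as p99
  | eval Percentile=if(count >= p99, "99%", "below 99%")
  | stats count as transactions by Percentile
  | eventstats sum(transactions) as total
  | eval pct=round(100 * transactions / total, 2)
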
Hello, I have a lookup which contains hostnames. How can I search over indexes (for example, index=*) restricted to only the hostnames in the lookup? Thank you.

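The usual pattern is a subsearch over the lookup; a sketch, assuming a lookup file named hostnames.csv whose column is named host:

  index=* [| inputlookup hostnames.csv | fields host]

The subsearch expands to (host="a" OR host="b" OR ...), so the column name in the lookup must match the field you want to filter on; rename it inside the subsearch if it doesn't.
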
I am collecting data which tests that a server can reach other destinations. This data is collected in the form of source, destination, application name, description, and state (did it connect or not). I would like to use this data in an ITSI service's KPI to measure the state of connections from applications to other servers. However, in ITSI it seems only possible to split by one value, not two, but I read that it's possible to create a new field combining the two fields to get around this. I've tried this but I'm getting some unexpected results:

1. I created a base search which creates a new combined field called 'host_application':

  index=main task=checkopenports | eval host_application = host . "_" . application

2. I added a new entity to the service called host_application and set it to match host123_websiteservice, which is what I want to pick out of the results for this service: the server 'host123' and the application 'websiteservice'.
3. In the KPI configuration I set 'Entity Split Field' to host_application, set 'Entity Filter Field' to host_application, and set the calculation to 'count'.

I'm hoping this should split the results by the host_application field and then only show items matching the host_application value set in the entities tab, but instead it shows a value of 0 and no entities. Any suggestions as to what I'm doing wrong here? Thanks, Eddie

Hi All, looking for some help on a new Cloud instance we have (5 GB per day) and to understand it a bit better. I have a non-IT/Sec background.

I have checked the CMC dashboard overview and am shown a 0 GB ingest volume. Throughput by index is showing values with a max of 11.597 KB (last 24 hours). When I go to the CMC dashboard / License Usage / Ingest, everything is blank with no figures showing (but search is working). When I go to the CMC dashboard / Workload, I can see SVC usage at its highest at 5.418. Current searchable index storage is 16 GB: _internal = 11, _metrics = 4, _introspection = 1. The Data Ingested graph says "no results found".

So far I have ingested a small 8 KB Excel CSV file into a lookup table and created a small inputlookup dashboard from it with a few graphs and charts. I have not set up any UFs or API feeds yet.

Is the CMC not showing my daily usage quota because the files I have imported are so tiny? If so, is there a way to show KB/MB and not GB at this stage? Or a way to show a simple bar graph of the 5 GB daily license and what total of it I have used per day? (I believe I used to have this simple-to-read info under License when I had Enterprise, but I do not seem to get the same results on Cloud.) Thank you all.

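For a finer-grained view than the CMC panels, a sketch of a search over the internal license log, charting daily usage in MB (assumes the _internal index is searchable on your Cloud stack):

  index=_internal source=*license_usage.log type=Usage
  | eval MB=round(b/1024/1024, 3)
  | timechart span=1d sum(MB) as MB_ingested

Worth noting: files uploaded as lookup tables are not indexed, so they do not count against the ingest license at all, which would explain a 0 GB reading.
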
What's the best way to create a base search that will be generic/portable across all clients and will look over a variable period of 45 days to identify which hosts have stopped sending logs to Splunk? I want to scale it out to about 20,000 apps. I want to keep the same SPL and use lookups and macros to filter the requirements for each client, but they will not be in the base search.

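A sketch of a tstats-based skeleton, with the client-specific filtering pushed out to a macro and a lookup (expected_hosts.csv, the client column, and `client_filter` are placeholder names; the one-day silence threshold is an assumption to tune):

  | tstats max(_time) as last_seen where index=* earliest=-45d by host
  | eval days_silent=round((now() - last_seen) / 86400, 1)
  | where days_silent > 1
  | lookup expected_hosts.csv host OUTPUT client
  | `client_filter`

tstats reads index-time metadata rather than raw events, so the base search stays fast at this scale while the macro and lookup carry everything client-specific.
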
I've a query which has columns like AccountNO, eventType, _time, and difference. I'm trying to find the time difference of each eventType (there are 13 eventTypes); I'm following an algorithm and am able to get the time differences of these 13 event types. Now my result looks like this:

  AccountNO   eventType     _time               difference
  123456789   eventType1    1/1/2021:12:00:00
              eventType2    1/1/2021:12:01:20
              eventType3    1/1/2021:12:03:00
              eventType4    1/1/2021:12:04:00
              eventType5    1/1/2021:12:08:00
              eventType6    1/1/2021:12:12:00
              eventType7    1/1/2021:12:13:00
              eventType8    1/1/2021:12:14:50
              eventType9    1/1/2021:12:16:00
              eventType10   1/1/2021:12:18:00
              eventType11   1/1/2021:12:19:00
              eventType12   1/1/2021:12:21:30
              eventType13   1/1/2021:12:23:00

I used eval and a formula to get the differences of the 13 eventTypes as D1, D2, D3, ..., D13. Now I want to map these D1 to D13 values into the difference field/column, so that my result looks like the below. I guess it has something to do with a CASE statement, but it's not working for me. Please help.

  AccountNO   eventType     _time               difference
  123456789   eventType1    1/1/2021:12:00:00   00:00
              eventType2    1/1/2021:12:01:20   01:20
              eventType3    1/1/2021:12:03:00   01:40
              eventType4    1/1/2021:12:04:00   01:00
              eventType5    1/1/2021:12:08:00   07:00
              eventType6    1/1/2021:12:12:00   02:00
              eventType7    1/1/2021:12:13:00   03:20
              eventType8    1/1/2021:12:14:50   02:00
              eventType9    1/1/2021:12:16:00   01:00
              eventType10   1/1/2021:12:18:00   02:00
              eventType11   1/1/2021:12:19:00   01:00
              eventType12   1/1/2021:12:21:30   02:00
              eventType13   1/1/2021:12:23:00   04:00

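Rather than mapping D1..D13 back with a case(), a sketch that computes each row's difference from the previous row directly (assumes one event per row, as in the table above):

  ...
  | sort 0 AccountNO _time
  | streamstats current=f last(_time) as prev_time by AccountNO
  | eval diff_sec=_time - prev_time
  | eval difference=tostring(coalesce(diff_sec, 0), "duration")

tostring(..., "duration") renders the gap as HH:MM:SS; current=f makes streamstats carry the previous event's _time into the current row, resetting per account.
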
Hi all, I have a dashboard which is comprised of 5 tables. However, sometimes it can get annoying scrolling all the way down. Is there a way that at the top of the dashboard I can have 5 hyperlinks that scroll to a particular section of the dashboard? Would this be possible by giving the tables IDs? The desired functionality is much like Confluence, where you can put anchors throughout the page and create hyperlinks to scroll to particular sections. Thanks, any help would be greatly appreciated!

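One possible approach, sketched under the assumption that an id set on a Simple XML element ends up as a DOM id an in-page anchor can target (worth verifying on your Splunk version; the ids, link text, and query here are made up): an <html> panel at the top carrying the links, and an id on each table.

  <row>
    <panel>
      <html>
        <a href="#tbl_summary">Summary table</a> | <a href="#tbl_detail">Detail table</a>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <table id="tbl_summary">
        <search><query>index=_internal | stats count by sourcetype</query></search>
      </table>
    </panel>
  </row>
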
We have a requirement to send audit logs from our host servers (/var/log/audit/audit.log) to both our indexers and to a 3rd-party syslog server. I am testing with a host gary-test2.ussl.uhs with audit logs in /var/log/audit/audit.log. I have configured the universal forwarder host gary-test2.ussl.uhs to redirect all its logs to the heavy forwarder. I would like to have the heavy forwarder send its logs to the indexers, but also a copy of all audit events to the syslog server syslogp01.ussl.uhs.

Here is the architecture involved in the routing:

  Universal forwarder: gary-test1.ussl.uhs
  Heavy forwarder: ussl-splkhfwt01.ussl.uhs
  Indexers: splkidxt01.ussl.uhs, splkidxt02.ussl.uhs
  Syslog server: syslogp01.ussl.uhs (10.17.8.206)

Here is how I configured the heavy forwarder splkhfwt01.ussl.uhs:

/opt/splunk/etc/apps/forwarder_syslog/local/props.conf

  [source::/var/log/audit/audit.log]
  TRANSFORMS-routing = troutingrsa

/opt/splunk/etc/apps/forwarder_syslog/local/transforms.conf

  [troutingrsa]
  REGEX = .
  DEST_KEY = _SYSLOG_ROUTING
  FORMAT = Myroutingrsa

/opt/splunk/etc/system/local/outputs.conf

  [tcpout]
  defaultGroup = default-autolb-group
  indexAndForward = 0

  [tcpout:default-autolb-group]
  disabled = false
  server = splkidxt01.ussl.uhs:9997,ussl-splkidxt02.ussl.uhs:9997

  [syslog:Myroutingrsa]
  server = 10.17.8.206:514
  sendCookedData = false
  type = udp
  disabled = false

What I am seeing is that the /var/log/audit/audit.log logs from host gary-test2.ussl.uhs are appearing in search queries on Splunk, and those same logs are also appearing on the syslog server. Here are the problems I found:

1. Logs other than the audit.log logs from the host gary-test2.ussl.uhs are also appearing on the syslog server.
2. I suspected that props.conf and transforms.conf were not doing their job, so I commented out all the settings in props.conf and transforms.conf and restarted Splunk. The logs continued to be sent to the syslog server, which says the props.conf and transforms.conf files are having no effect.
3. Just to be sure, I removed "[syslog:Myroutingrsa]" and its settings from outputs.conf. That made the logs stop forwarding to the syslog server.

Does anyone see what is wrong with my forwarding configuration settings?

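One thing worth checking, offered as a hedged sketch rather than a confirmed diagnosis: a source-based props stanza only matches if the source path survives to the parsing tier exactly, so a host-based stanza on the heavy forwarder can help confirm whether the transform is what selects events for syslog routing (the wildcarded host pattern below is an assumption):

  # props.conf on the heavy forwarder: match by host instead of source
  [host::gary-test2*]
  TRANSFORMS-routing = troutingrsa

Running "splunk btool props list --debug" on the heavy forwarder will also show whether the stanza is being picked up from your app at all.
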
Hi all, I have a field that has a time value such as (the _time field): 2021-08-12 15:18:42. However, when I use the rename command on the _time field, it changes the format to: 1628723833. Any assistance on how NOT to make the date format change while also renaming the field would be greatly appreciated.

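For context: _time is stored as epoch seconds, and the results table only renders it as a human-readable date while the field is literally named _time, so renaming it exposes the raw epoch value. A sketch of the usual workaround: format first, then rename (the display name is an assumption):

  ...
  | eval Event_Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
  | fields - _time
  | rename Event_Time as "Time of Event"
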
"service.indexes" in splunklib for Python return by default a collection with only event indexes (no metric indexe). Is it a way to get a collection from all metric and event indexes giving the filt... See more...
"service.indexes" in splunklib for Python return by default a collection with only event indexes (no metric indexe). Is it a way to get a collection from all metric and event indexes giving the filter parameter "datatype=all" ?   Thanks
I am looking for a Splunk query which can calculate how much data each sourcetype is ingesting into Splunk. You can take the below sample pricing for example:

  summary_capacity: 0.01 per GB per month / 0.2 per GB per month
  Splunk license: $5 per CPU per day
  Indexer: $10.15 per day

So what would be the most efficient Splunk query for calculating cost based on how much data each sourcetype ingests?

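A sketch of per-sourcetype ingest from the internal license log, with a cost eval using the question's example rate (the 30-day window and the rate applied are assumptions; in license_usage.log the sourcetype is recorded in the st field):

  index=_internal source=*license_usage.log type=Usage earliest=-30d@d
  | stats sum(b) as bytes by st
  | eval GB=round(bytes/1024/1024/1024, 3)
  | eval monthly_cost=round(GB * 0.01, 2)
  | rename st as sourcetype
  | sort - GB
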
Hello, we have a variety of different AWS logs (i.e. CloudWatch, CloudTrail, Config, VPC Flow, Aurora) and non-AWS logs (i.e. Palo Alto, Trend Micro) routed to S3 buckets today. There are a total of 15 S3 buckets (5 per AWS account). Having recently purchased and configured an on-premise Splunk ES (distributed deployment w/ index clustering, no SH clustering yet), our goal is to begin forwarding these logs to our Splunk deployment. What are some considerations that we should keep in mind? Since we're going with a push approach, we're planning to do the following. Could someone confirm if this looks right? I'm open to suggestions.

1. Send the logs from the S3 buckets to Amazon Kinesis Firehose.
2. Firehose writes a batch of events to Splunk via HEC. Since the indexers are not in an AWS VPC (they reside in a separate Oracle Cloud instance), I'm assuming that an SSL certificate needs to be installed on each indexer? We have 1 index cluster in a Production environment and a separate one in our Disaster Recovery environment.
3. Assign a DNS name that resolves to the set of indexers which will collect data from Kinesis Firehose.
4. Install the Splunk Add-on for Amazon Kinesis Firehose on the Enterprise and ES search heads, as well as the cluster master.
5. Ensure a new index is created for AWS logs (1 sourcetype for each AWS log source) and existing indexes are used for the Palo Alto and Trend Micro logs. If new indexes are needed for Palo Alto and Trend Micro logs, I'm assuming that they would still adhere to the appropriate Splunk ES data models.
6. Configure HEC and create a new HEC token; there will be a unique HEC token per sourcetype (see the token sketch after this list).
7. Configure Amazon Kinesis Firehose to send data to Splunk. Ensure all events are backed up to an S3 bucket until it is confirmed that all events are processed by Splunk.
8. Search for data by sourcetype to confirm that it is being indexed and visible.

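Regarding the tokens in step 6: Kinesis Firehose requires indexer acknowledgment to be enabled on the HEC token it sends to. A sketch of one token stanza in inputs.conf (the token value, index, and sourcetype are placeholders):

  [http]
  disabled = 0
  enableSSL = 1

  [http://aws_cloudtrail_firehose]
  token = 11111111-2222-3333-4444-555555555555
  index = aws_cloudtrail
  sourcetype = aws:cloudtrail
  useACK = true
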
I have the following query:

  index=index
  | spath output=traceSteps path=traceSteps{}
  | table traceSteps
  | mvexpand traceSteps
  | rex field=traceSteps "(message\"\:\"(?<mensagem>(?<=\")(.*?)(?=\")))"
  | where mensagem LIKE "CPF%"
  | stats count

When I change "| stats count" to "| timechart span=1d count" to show results by date, I get "no results found". Why? What am I doing wrong?

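A likely culprit, for context: | table traceSteps keeps only that one field and discards _time, and timechart cannot bucket events without _time. A sketch with the table removed so _time survives to the end (the rex is also simplified here; the original lookaround version should work equally well):

  index=index
  | spath output=traceSteps path=traceSteps{}
  | mvexpand traceSteps
  | rex field=traceSteps "message\":\"(?<mensagem>[^\"]*)"
  | where like(mensagem, "CPF%")
  | timechart span=1d count
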
Hello, the question is pretty straightforward: I would like to alert if 3 failed logins followed by 1 successful login from one user are observed. For example:

  Minute       user   action
  1st minute   xyz    failure
  2nd minute   xyz    failure
  3rd minute   xyz    failure
  4th minute   xyz    success

If this condition occurs, I would like to create an alert. Thanks in advance.

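A sketch using streamstats over a sliding window, assuming the events carry user and action fields with failure/success values (the index and field names are assumptions):

  index=auth action=failure OR action=success
  | sort 0 user _time
  | streamstats current=f window=3 count(eval(action=="failure")) as prior_failures by user
  | where action=="success" AND prior_failures==3

current=f makes the window cover only the three events before each row, so the where clause fires exactly when a success is preceded by three straight failures; saved as an alert, "number of results > 0" becomes the trigger condition.
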
I need some help with an alert I have been stuck on. I have a DB Connect lookup that returns a value once a day. This value currently contains 18 IPs, all separated by commas, for example value=1.1.1.1/24,2.2.2.2,5.5.5.5/16. I need a search I can create an alert off of if an IP is added compared to when it was last run. I.e., search 1 at 6am had 5 IPs; search 2 the next day has 6 IPs: alert. Right now I get all the IPs in one field called "value", which looks like the below (IPs changed for this post):

  value="1.526.323.176/2,133.58.35.4/2,10.199.0.99/14"

I basically need the alert to send our team an email letting us know an IP has been added and we should look into it.

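A sketch of a baseline-comparison approach: split the field, compare against a snapshot stored in a lookup, and alert on anything new (ip_baseline.csv is a placeholder name, and the first line stands in for however you currently fetch the value):

  <your daily DB Connect search that returns the value field>
  | makemv delim="," value
  | mvexpand value
  | eval ip=trim(value)
  | table ip
  | eval state="current"
  | append [| inputlookup ip_baseline.csv | eval state="baseline"]
  | stats values(state) as state by ip
  | where mvcount(state)==1 AND state=="current"

Anything returned exists today but not in the baseline, so "number of results > 0" can drive the email alert; a second scheduled search ending in | outputlookup ip_baseline.csv refreshes the snapshot after the comparison runs.
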
We run some reports to list specific filenames that we've received over a period of time. These particular reports are predicated on account and file name matches. Can we create an alert from one of these reports to identify an account and filename that was NOT received? Please let me know how it can be done. Thanks. An example of one of the reports is below:

  index=log source="/logs/file_tracking.log" (Accountname IN("Account1") AND Filename IN ("File1*","File2*","File3*","File4*","File5*","File6*","File7*"))
  | table Transfer, Account, File, Start_Time, End_Time
  | sort - Start_Time

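A sketch of the usual "expected vs. received" inversion: append the full expected list with a zero count, then keep entries whose received total stays at zero (the case() mapping is abbreviated to three files here for brevity):

  index=log source="/logs/file_tracking.log" Accountname IN("Account1")
  | eval expected=case(like(Filename,"File1%"),"File1",
                       like(Filename,"File2%"),"File2",
                       like(Filename,"File3%"),"File3")
  | stats count by expected
  | append
      [| makeresults
       | eval expected=split("File1,File2,File3", ",")
       | mvexpand expected
       | eval count=0
       | fields expected count]
  | stats sum(count) as received by expected
  | where received=0

Any row that survives is an expected file with no matching events in the search window, which makes a natural alert condition.
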
We installed a DB agent on the database server and it has been reporting to the Controller. We would like to manage health rules for DB agents (database availability alerts), but when we try to configure health rules we do not find our database agent names.

I have a query something like this:

  index=sample source=test (earliest=-1d@d latest=@d) OR (earliest=-2d@d latest=-1d@d) OR (earliest=-3d@d latest=-2d@d)
  | bin span=.1 Seconds
  | eval dayOfDate=strftime(_time,"%Y/%m/%d")
  | stats count by Seconds, dayOfDate
  | xyseries Seconds dayOfDate count

which displays results something like this (as an example):

  Seconds   8/9   8/10   8/08
  0.0-0.1   42    22     33
  0.1-0.2   22    32     44

How can I convert the data shown under 8/08, 8/9, and 8/10 into percentages?

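A sketch that converts each day's counts to percentages of that day's total before pivoting (assumes "percentage of that day's column total" is the intent):

  index=sample source=test (earliest=-1d@d latest=@d) OR (earliest=-2d@d latest=-1d@d) OR (earliest=-3d@d latest=-2d@d)
  | bin span=.1 Seconds
  | eval dayOfDate=strftime(_time,"%Y/%m/%d")
  | stats count by Seconds, dayOfDate
  | eventstats sum(count) as day_total by dayOfDate
  | eval pct=round(100 * count / day_total, 2)
  | xyseries Seconds dayOfDate pct

eventstats attaches each day's total to every row without collapsing them, so the percentage can be computed per row and then pivoted exactly as before.
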