All Topics

Hello, is it possible to style status_indicator.status_indicator_app in the same way we can style the "single value" chart? Can code similar to this be used?

<html>
<style>
#test {
  font-size: 18px !important;
  font-weight: bold !important;
  color: red;
}
.single-value .single-result {
  font-size: 30px !important;
}
</style>
</html>

Thanks, eholz1

We are ingesting Firepower logs via syslog using the cisco:asa TA. Many of the events I am interested in are Threat Defense events that are tied to an ID like FTD-6-430002. When I narrow my search down to events with just that ID, I find the rest of the event has plenty of info in key:value pairs, but no fields have been extracted from the pairs. Sanitized example event:

Mar 3 16:01:21 172.16.51.72 Mar 03 2023 22:01:21 firepower : %FTD-6-430002: EventPriority: Low, DeviceUUID: 00000-0000-0000-000000000000, InstanceID: 1, FirstPacketSecond: 2023-03-03T22:01:21Z, ConnectionID: 5000, AccessControlRuleAction: Allow, SrcIP: 100.100.100.100, DstIP: 200.200.200.200, SrcPort: 60000, DstPort: 10, Protocol: tcp

Is there a regex command that can dynamically extract all the field names from something like "DstPort: 10", giving a field named DstPort with a value of 10? I know Cisco provides an eStreamer TA that may extract these fields, but it looks very involved to set up and I already have the syslog configured.

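One possible approach (a sketch only, not tested against this exact feed; the index name below is an assumption) is to let the extract command split the comma-delimited key:value pairs dynamically at search time:

index=firewall sourcetype="cisco:asa" "%FTD-6-430002"
| extract pairdelim="," kvdelim=":"
| table _time SrcIP DstIP SrcPort DstPort Protocol AccessControlRuleAction

Because extract works on the whole _raw event, the leading syslog timestamps can produce a few junk fields, but the key:value pairs after the message ID should come out as proper field/value pairs.
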
Can someone tell me how to use the line breaker parameter for the below events, which are currently getting clustered together?

{"sdsf shcdzvdvv zdcnvdzvvdd."}
{bfdf  dvdfd  dfdfdf ddgvdvdfd."}
{dfdfd dfdfd dgdgd dgdgdgdgg."}

props.conf:

LINE_BREAKER = (\.\"\})

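For reference, the text matched by the first capture group in LINE_BREAKER is discarded and replaced by the event boundary, so capturing ."} strips those characters from the end of each event. A sketch that instead captures only the separator between events (assuming the stanza is applied on the indexer or heavy forwarder that first parses this data, and that the sourcetype name below is a placeholder):

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = \"\}(\s*)\{

This keeps "} at the end of one event and { at the start of the next, breaking only on the whitespace between them.
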
index=acs-app-log sourcetype=iccim_bwm_servicename processname=response_AM
| stats count by verificationstatus

Results:

verificationstatus  count
Failed              100
Success             230
pending             456

I need to get the percentage of Failed, Success, and pending. How can I do that?

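A sketch of one way to do it (field names taken from the question): add the overall total with eventstats, then derive a percentage per status:

index=acs-app-log sourcetype=iccim_bwm_servicename processname=response_AM
| stats count by verificationstatus
| eventstats sum(count) as total
| eval percent=round(count * 100 / total, 2)
| fields verificationstatus count percent
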
I have two fields: Network_Address and Netmask. The Network_Address field has the network address of the network as its value and the Netmask field has the network mask as its value. Here is an example:

Network_Address  Netmask
10.1.1.0         255.255.255.0

How can I write a search so Splunk tells me the CIDR subnet range for the two fields? I need the output to be put in a new field named CIDR.

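A sketch that derives the prefix length numerically (it assumes the netmask is a contiguous mask, which is the case for normal subnet masks):

... your base search ...
| eval m = split(Netmask, ".")
| eval mask_int = tonumber(mvindex(m,0))*16777216 + tonumber(mvindex(m,1))*65536 + tonumber(mvindex(m,2))*256 + tonumber(mvindex(m,3))
| eval prefix = round(32 - log(4294967296 - mask_int, 2))
| eval CIDR = Network_Address . "/" . prefix

For the example row this yields CIDR = 10.1.1.0/24. The arithmetic works because a contiguous mask of length n leaves exactly 2^(32-n) host addresses in the 32-bit space.
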
The output in the Splunk console:

3/3/23 2:05:41.000 AM    03/03/2023 02:05:41 p.m. 14664 5046661

Note that the Splunk _time is pulling the timestamp from _raw, but not interpreting the "p.m.", so Splunk is posting the time of the event as 2:05 AM. I have tried a few different combinations for the TIME_FORMAT in the props.conf file, and nothing is helping. Here is the current stanza:

[###_###_###_#######]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 26
TIME_FORMAT = %d/%m/%Y %I:%M:%S
TIME_PREFIX = ^
category = Custom
disabled = false
pulldown_type = true
EXTRACT-total_processing_time = ^[^\t\n]*\t(?P<total_processing_time>\d+\t)
EXTRACT-application_id = ^(?:[^\t\n]*\t){2}(?P<application_id>.+)

Current TIME_FORMAT:

TIME_FORMAT = %d/%m/%Y %I:%M:%S

I've tried this with %p and %P with no success. Any ideas?

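For what it's worth, a sketch of a stanza that at least declares the meridiem and gives the parser room to read it (the values below are assumptions, not a verified fix): strptime's %p normally expects AM/PM rather than "p.m." with periods, so %p alone may still not match and the periods may need to be stripped upstream, but the lookahead also has to be long enough to reach past the seconds to the meridiem:

[###_###_###_#######]
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %I:%M:%S %p
MAX_TIMESTAMP_LOOKAHEAD = 30
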
Search:

index=xxxx sourcetype=xxxxx home_feature!=connectapp application_name IN (artical, login, management, pageout)
| table Description application _time count
| sort Description _time home_feature application_name
| streamstats current=f window=1 values(Description) as desp values(home_feature) as app values(_time) as totaltime values(count) as totalcount
| eval siml=if(home_feature == app AND Description == desp, count - totalcount, 0)
| eval siml2=if(siml < 0, count, siml)
| where siml2 > 0
| eval time=strftime(now(), "%d/%m/%YT%H:%M:%S")
| stats sum(value) by home_feature, application_name

Output:

home_feature    application_name  sum(value)
ampt.gc.com     login             298
ampt.gc.com     pageout           2341
https:gtt.com   artical           4567
wcw.gft.com     management        678
app.df.com      login             499
rt.hj.com       pageout           567
tt.com          artical           345
ggt.com         management        178

But I need the output as shown below:

_time                home_feature   login  pageout  management  artical
03/02/2023T14:05:15  ampt.gc.com    298    100      678         567
03/02/2023T12:05:15  ampt.gc.com    345    345      12341       789
03/02/2023T11:05:15  https:gtt.com  100    45678    9087        4567
03/02/2023T10:05:15  wcw.gft.com    456    567      678         789
03/02/2023T09:05:15  app.df.com     900    345      23499       3215
03/02/2023T08:05:15  rt.hj.com      789    125      567         678
03/02/2023T06:05:15  tt.com         12     34       345         45
03/02/2023T04:05:15  ggt.com        23     14       178         34

How can I achieve this?

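A sketch of the final reshaping step (it assumes siml2 is the value to be summed and that hourly time bins are acceptable): pivot application_name into columns with xyseries while keeping both _time and home_feature on the row:

... everything in your search before the final stats ...
| bin _time span=1h
| stats sum(siml2) as value by _time home_feature application_name
| eval key = strftime(_time, "%d/%m/%YT%H:%M:%S") . "|" . home_feature
| xyseries key application_name value
| eval _time = mvindex(split(key, "|"), 0), home_feature = mvindex(split(key, "|"), 1)
| fields - key
| table _time home_feature login pageout management artical

The composite key is just a workaround so that xyseries can keep two row fields at once; it is split back apart afterwards.
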
I have a lookup table that includes a "time" column (time format %m/%d/%Y %H:%M:%S). Can someone please help me develop a search that counts the number of dates within that column that are within the last 30 days? Thank you in advance. Sven

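A sketch, assuming the lookup file is named my_lookup.csv (substitute the real name) and the column is literally called time:

| inputlookup my_lookup.csv
| eval time_epoch = strptime(time, "%m/%d/%Y %H:%M:%S")
| where time_epoch >= relative_time(now(), "-30d")
| stats count
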
Hi,

Splunk Version: 8.2.6
Index Type: Metrics
Dashboard type: Dashboard Studio

I've followed the document here https://docs.splunk.com/Documentation/SplunkCloud/latest/DashStudio/inputs to try to create an input that is populated via a datasource (I've also looked at a number of questions on this forum). The specified configuration simply does not work:

"items": ">frame(label, value) | prepend(formattedStatics) | objects()",

I mean, what does this even mean? The syntax is pretty obscure and there is no documentation. I've looked at the formattedStatics bit and created exactly the same configuration, but using my own data source, and the input simply shows the default value and does not display the results from the query. My datasource is the following:

|mcatalog values("userName") as userName where index=myapp_metrics

I've tried all sorts of combinations, including:

"label": ">primary | seriesByName(\"userName\") | renameSeries(\"label\") | formatByType(formattedConfig)",
"value": ">primary | seriesByName(\"userName\") | renameSeries(\"value\") | formatByType(formattedConfig)"

None of this has worked. All the examples I've seen are for an events index. Is there something special I need to do for a metrics index?

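One thing that may be worth checking (a sketch, and an assumption on my part rather than documented behavior for metrics indexes): the dynamic-options framework appears to expect the datasource to return columns literally named label and value, whereas mcatalog returns the usernames as a single multivalue cell, so reshaping the result might help:

| mcatalog values("userName") as userName where index=myapp_metrics
| mvexpand userName
| eval label = userName, value = userName
| table label value
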
I am trying to make two searches using the same index and source. The first search looks for all entries with "message received" and the "Message Id". The second search looks for the "Message Id" and "Message Type". I am trying to find the number of messages received for a specific message type. Example:

(index = exampleindex source = examplesource ("message received" AND "MessageId")
| rex field=_raw "MessageId:(?<messageId1>[\S]+)\s.*"
| eval Id1 = messageId1)
OR
(index = exampleindex source = examplesource ("MessageType" AND "MessageId")
| rex field=_raw "MessageId:(?<messageId2>[\S]+)\s.*"
| eval Id2 = messageId2)

I want to compare the two searches and count how many Ids in the first search appear in the second search.

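A sketch of one way to do this as a single search and correlate on the extracted id (it assumes both event types share the same MessageId: format):

index=exampleindex source=examplesource "MessageId" ("message received" OR "MessageType")
| rex field=_raw "MessageId:(?<messageId>\S+)"
| eval kind = if(searchmatch("message received"), "received", "typed")
| stats dc(kind) as kinds by messageId
| where kinds = 2
| stats count as ids_in_both

The first stats counts how many distinct event kinds each id appears in, so ids_in_both is the number of ids seen in both searches.
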
Hi All, I am in need of your help. I am building a dashboard and have included a time range picker as an input. I have hidden some of the options in the time range picker (relative panel, date range panel, real-time panel, etc.). I am trying to add a new time option under Presets (last 15 days). Please guide me on how to achieve this.

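In case it helps, the presets shown in a time range picker come from times.conf, so a sketch of a stanza (the stanza name and label are illustrative, placed in an app's local/times.conf) might be:

[last_15_days]
label = Last 15 days
earliest_time = -15d
latest_time = now
order = 50
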
I have fields called start.point and end.point in my logs. We can assume they hold values in x and y coordinates. A part of my raw logs looks like this:

... , "start" : { "time_s" : 1234 , "point" : [2.5,5.5]}, "end" : { "time_s" : 2344 , "point" : [9.5,8.5]}, ...

And in the list view, when I select the fields they show in this format:

start.point{} = 2.5
start.point{} = 5.5
end.point{} = 9.5
end.point{} = 8.5

Now, all I want to do is calculate the distance from the start point to the end point and build a dashboard that shows a distance graph. The formula is sqrt[(x2 - x1)^2 + (y2 - y1)^2]. However, I am having difficulty extracting the values x1, y1 and x2, y2 from the fields start.point and end.point.

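A sketch, assuming the multivalue fields always hold the x coordinate first and the y coordinate second (as in the sample event):

... your base search ...
| eval x1 = tonumber(mvindex('start.point{}', 0)), y1 = tonumber(mvindex('start.point{}', 1))
| eval x2 = tonumber(mvindex('end.point{}', 0)), y2 = tonumber(mvindex('end.point{}', 1))
| eval distance = sqrt(pow(x2 - x1, 2) + pow(y2 - y1, 2))
| timechart avg(distance) as distance

The single quotes around the field names are needed because of the dots and braces in the JSON-derived field names.
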
We're observing hundreds of tokens in the tokens page created by Mission Control, for the same couple of users. Most of them are never used. Multiple tokens are created in short timeframes. Is this a known issue?
Hi Community, in my metrics index (vital metrics), how can I find host status (which can take Up or Down values)? Up when the host is up, Down when the host is down.

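A sketch of one common pattern (the index name, the short time window, and the inventory lookup are assumptions): treat a host as Up if it has reported any metric recently, and mark everything else from a host inventory lookup as Down. Run it over a short range such as the last 5 minutes:

| mstats count(_value) as samples where index=vital_metrics metric_name="*" by host
| eval status="Up"
| append [| inputlookup all_hosts.csv | fields host | eval status="Down"]
| stats first(status) as status by host
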
The configtracker index contains a JSON path of data.changes{}.properties{}. In that path, there are numerous objects:

data
    changes
        properties
            + (contains name, old_value, new_value)
            + (contains name, old_value, new_value)
            + (contains name, old_value, new_value)

I've tried numerous ways of parsing data.changes{}.properties{}, but am still unable to display the name, old_value, and new_value of each object beneath data->changes->properties. Ultimately, I'd like to be able to render a table of "name" where an old_value exists so that we can alert on changed correlation searches in ES.

For example, where "name" = search (and both old_value and new_value are not empty):

{
    name: search
    new_value: `sysmon` foo
    old_value: `sysmon` bar
}

or where "name" = cron_schedule (and both old_value and new_value are not empty):

{
    name: cron_schedule
    new_value: 6-56/10 * * * *
    old_value: */10 * * * *
}

or where a search schedule was enabled:

{
    name: enableSched
    new_value: 1
    old_value: 0
}

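A sketch of one way to flatten the array (it assumes the events live in the _configtracker index and that savedsearches.conf changes are the ones of interest):

index=_configtracker data.path="*savedsearches.conf*"
| spath path=data.changes{}.properties{} output=prop
| mvexpand prop
| spath input=prop
| search name IN ("search", "cron_schedule", "enableSched") old_value=* new_value=*
| table _time name old_value new_value

mvexpand turns each properties object into its own result row, and the second spath then extracts name, old_value, and new_value from that single object.
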
I am about to upgrade an 8.1.3 distributed / clustered environment to 9.0.4. Per the docs:

Migrate your App Key Value Store storage engine from the Memory Mapped (MMAP) storage engine to the Wired Tiger storage engine, and update your MongoDB version from 3.6 to 4.2. These updates are required in Splunk Enterprise 9.0. See Migrate the KV store storage engine in the Admin manual to plan your migration. Back up your App Key Value Store (KV Store) databases prior to starting an upgrade. If you run version 7.1 and lower of Splunk Enterprise, you must stop Splunk Enterprise instances first.

I am going to do this part prior to the upgrade. I inherited this deployment, so I don't know where all the KV stores are located or which nodes have a default KV store needing the same migration/upgrade (e.g. MC or DS).

My environment: SHC(4), SHC deployer, IDXCM, IDXC(10), MC/LM, DS, HFs, UFs

Any advice appreciated. If someone can share knowledge such as how to validate KV store locations and which KV store(s) need the update, that would help. Thank you.

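For what it's worth, a sketch of the per-instance checks (run on each full instance that might host collections, typically the search head cluster members, the deployer, the MC/LM, and the DS; the archive name below is just an example):

$SPLUNK_HOME/bin/splunk show kvstore-status
$SPLUNK_HOME/bin/splunk backup kvstore -archiveName pre_904_upgrade
$SPLUNK_HOME/bin/splunk migrate kvstore-storage-engine --target-engine wiredTiger

The first command shows whether KV store is enabled on that node and its current state, the second takes a backup before any change, and the third performs the storage engine migration described in "Migrate the KV store storage engine".
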
We have a list of authorized users who have access to a specific database, and we created a lookup table named "Authorized_list.csv". I tried a search query so that if any unauthorized user(s) (anyone not in that lookup table) access the DB, we get notified.

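A sketch, assuming the database audit events carry a user field and that the lookup has a matching column also called user (adjust the index, sourcetype, and field names to your data):

index=db_audit sourcetype=database_access
| search NOT [| inputlookup Authorized_list.csv | fields user]
| stats count earliest(_time) as first_seen latest(_time) as last_seen by user

Saved as an alert that triggers when the number of results is greater than zero, this would notify on any access by a user who is not in the lookup.
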
Hello,

I have a Splunk instance (Splunk 9.0.0.1, CentOS 7) configured as a heavy forwarder. When I issue "splunk stop" as either the splunk user or root, it restarts within a minute. This system has been configured to start as a systemd service, and the splunk user is allowed to issue systemctl commands to stop and restart. Just now, I completely disabled the systemd service and then rebooted. After about a minute of being fully booted, Splunk automatically started up. It's as if there's a parasitic cron or anacron job that starts it if it's not running. What could be causing this? I would really rather Splunk stayed down, especially since I am usually altering the configuration files at the time, and it starts up before I can complete the task.

--jason

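A few checks that might narrow it down (paths assume a default install under $SPLUNK_HOME; the unit name varies between Splunkd and splunk depending on how boot-start was enabled):

# look for any remaining systemd unit or legacy init script
systemctl list-unit-files | grep -i splunk
ls /etc/init.d/ | grep -i splunk

# look for cron entries that relaunch splunkd
crontab -l -u splunk; crontab -l -u root
grep -ri splunk /etc/cron* 2>/dev/null

# remove boot-start integration entirely if it is not wanted
$SPLUNK_HOME/bin/splunk disable boot-start
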
Hi team,

Currently, I'm on a project working with Splunk. The project is built with Spring Boot and WebFlux reactive programming. I could not find any documentation or code about reactive programming with a Splunk client in Java/Spring. Can you help confirm whether Splunk supports reactive programming? If not, I can ask my team to stop applying WebFlux in our project.

Thanks,
Minh Nguyen