
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi there, I have some results after running the command below:

my search
| bucket _time span=1h
| stats count by _time http_status
| eventstats sum(count) as totalCount by _time
| eval percent=round((count/totalCount),3)*100
| fields - count - totalCount

The output is as follows:

time                 status   percent
2022-03-02 05:30:00  100      10.0
2022-03-02 05:30:00  200      30.0
2022-03-02 05:30:00  300      60.0
2022-03-02 06:30:00  100      30.0
2022-03-02 06:30:00  200      60.0
2022-03-02 07:30:00  300      10.0
2022-03-02 07:30:00  100      20.0
2022-03-02 07:30:00  200      30.0
2022-03-02 06:30:00  300      50.0

I am trying to transpose the output as below:

time                 100    200    300
2022-03-02 05:30:00  10.0   30.0   60.0
2022-03-02 06:30:00  30.0   60.0   10.0
2022-03-02 07:30:00  20.0   30.0   50.0

Please assist.
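A minimal sketch of one possible approach, appended for reference: xyseries pivots a three-column result (row field, column field, value field) into one column per distinct column value, which matches the desired layout. The pipeline below reuses the original search unchanged and only swaps the final fields command for xyseries.

my search
| bucket _time span=1h
| stats count by _time http_status
| eventstats sum(count) as totalCount by _time
| eval percent=round((count/totalCount),3)*100
| xyseries _time http_status percent

The fields command is no longer needed, because xyseries keeps only the three fields it is given.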
Hi All, I have searched for many months and been unable to locate what I need; something I believe should be so simple is eluding me, so I am looking for some help on this. I am trying to change the colour of a bar/column chart so that a different colour represents the size of the shop, showing how many alarm incidents each has had. Visually this should let me compare alerts across my large shops and my small shops by colour alone, without having to remember each shop's size (e.g. all green shops are small). My test table is as follows. On my graph I would like the size of the shop to be colour coded: Large = blue, Medium = yellow, Small = green (I am not fussy about the exact colours).

Shop  Size    TypeReport  NoEvents
A     Large   FrontAlarm  76
A     Large   BackAlarm   115
B     Small   FrontAlarm  37
B     Small   BackAlarm   132
C     Medium  FrontAlarm  81
C     Medium  BackAlarm   39
D     Large   FrontAlarm  159
D     Large   BackAlarm   110
E     Small   FrontAlarm  26
E     Small   BackAlarm   71
F     Medium  FrontAlarm  113
F     Medium  BackAlarm   49

I have tried several evals but just do not seem to be able to get this right. I have tried to follow several answers within the Splunk community on this topic, but because those answers evaluate time, it throws me off and I lose that last piece of the puzzle. I have been trying things such as:

| inputlookup Testcolor.csv
| search TypeReport="FrontAlarm"
| stats count by NoEvents
| eval {NoEvents}=count
| fields - count

and changing the source with the below, but still no luck:

<option name="charting.fieldColors">{"A":#32a838,"B":#006D9C,"C":#006D9C,"D":#32a838,"E":#006D9C,"F":#006D9C}</option>

to even trying:

| inputlookup Testcolor.csv
| search TypeReport="FrontAlarm"
| stats count by NoEvents
| eval Shop="A, B, C, D, E, F"
| makemv Shop delim=","
| mvexpand Shop
| eval count=NoEvents
| table Shop count
| eval {Shop}=count
| fields - count

The above seemed to get me close, but no cigar. I have another 6 weeks before I really need to figure this out; any help would be appreciated. (I'd also prefer to build this in Dashboard Studio if that helps my problem. I am also only using static data, so no times are pulled in.)

Cheers
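A hedged sketch of one approach, assuming Simple XML and the lookup above: if the chart is split by Size as well as Shop, each size becomes its own series, and charting.fieldColors can then colour the series by name (note the hex values must be quoted strings in the JSON).

| inputlookup Testcolor.csv
| search TypeReport="FrontAlarm"
| chart sum(NoEvents) as Events by Shop Size

<option name="charting.fieldColors">{"Large":"#006D9C","Medium":"#FFD700","Small":"#32a838"}</option>

Because each shop has exactly one size, each column shows a single coloured segment even though the chart is technically stacked.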
If I do an index search, raw events are listed in reverse _time order, which is often also the reverse _indextime order, so I don't know exactly which. But if I table the results, the table is no longer in this order. Why is that? I used the following to inspect the table:

sourcetype=sometype
| eval indextime=strftime(_indextime, "%F %T")
| table _time indextime

The table roughly lists later entries first, but not consistently; entries are often swapped by hours.
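For reference, result order after transforming commands or a distributed search is not guaranteed: results streaming back from different indexers can interleave. A minimal sketch that pins the order explicitly (sort 0 removes the default 10,000-row limit):

sourcetype=sometype
| eval indextime=strftime(_indextime, "%F %T")
| sort 0 - _time
| table _time indextime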
Hello all, I'd like to compare events in the same log files, assuming the format of the events is the same. For example:

event1: ccc, ddd
event2: bbb, ccc
event3: aaa, bbb

As you can see there's a pattern: the 2nd part (bbb) in event3 is always the same as the 1st part in event2, and the 2nd part in event2 (ccc) is always the same as the 1st part in event1. My question is how do I check whether all the events in the same log file match this pattern. Thank you in advance! Sincerely, Gai
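A minimal sketch under stated assumptions: the field names p1 and p2 and the rex that extracts them are hypothetical, and events are assumed to arrive in the default newest-first order shown above. streamstats carries the previous event's first part forward so each event can be checked against its neighbour; mismatches=0 means the whole file matches the pattern.

... your search ...
| rex "(?<p1>\w+),\s*(?<p2>\w+)"
| streamstats current=f window=1 last(p1) as prev_p1 by source
| eval ok=if(isnull(prev_p1) OR p2==prev_p1, 1, 0)
| stats sum(eval(1-ok)) as mismatches by source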
Hey everyone, I am trying to gauge at what time users are active on our app. I want to use data from All time to gather the average on a 24-hour scale. Is there a way I can see the average count by hour? Right now this just shows the times when users log in. It would be super useful to know how many users on average use the app by X AM/PM. My current query is:

index=app1 AND service=app AND logLevel=INFO AND environment=prod "message.eventAction"=START_SESSION
| timechart span=1h count

This query can gather the users by hour on a 24-hour scale, but not the average from All time. If anyone could help, it would be greatly appreciated!
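A minimal sketch of one way to do this: count sessions per hourly bucket first, then average those counts by hour of day, so every midnight-to-1am bucket in the whole range contributes to the "00" row, and so on. (One caveat: hours with zero sessions produce no bucket at all, which slightly inflates the averages.)

index=app1 AND service=app AND logLevel=INFO AND environment=prod "message.eventAction"=START_SESSION
| bucket _time span=1h
| stats count as sessions by _time
| eval hour=strftime(_time, "%H")
| stats avg(sessions) as avg_sessions by hour
| sort 0 hour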
I want to create a 30-day index of data that changes its indexed timestamp as each day passes, so the data will always show up when I do a last-30-days search and I don't need to pick out the specific 30 days I saved. I.e., if I started with January data, on June 1st the original data from January should represent the month of May. Is there any way to change the time of the data in the index every day? Or does it have to be deleted from the index and re-added?
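For what it's worth, indexed events are immutable in Splunk: _time cannot be rewritten in place, so changing the stored timestamp really does mean deleting and re-indexing. A search-time workaround is to shift _time with eval so the static data is projected into the trailing 30 days; a minimal sketch (the index name is hypothetical):

index=static_demo
| eval age=now()-_time
| eval _time=now() - (age % (30*86400))

This maps every event into the last 30 days while preserving its position within a repeating 30-day cycle, and the index itself is never modified.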
Hello all, I have a simple dashboard with a dropdown under the title. When I add styles to the title, the dropdown input element interferes with it and the full height of the title panel is not visible. This is my current inline style content. I want to display the full height of my title panel. Can anyone help?

<label>Endpoint Configurations Summary Dashboard</label>
<row depends="$alwaysHideCSSPanel$">
  <panel>
    <html>
      <style>
        .dashboard-panel h2 {
          background: #6495ED !important;
          color: white !important;
          text-align: center !important;
          font-weight: bold !important;
          border-top-right-radius: 15px;
          border-top-left-radius: 15px;
        }
        .highcharts-background {
          fill: #ffffff !important;
        }
        .highcharts-grid-line {
          fill: #ffffff !important;
        }
        h1 {
          background: #6495ED !important;
          color: white !important;
          text-align: center !important;
          font-weight: bold !important;
          border-top-right-radius: 15px;
          border-top-left-radius: 15px;
        }
        h2, h3, p {
          color: #696969 !important;
          text-align: center !important;
        }
      </style>
    </html>
  </panel>
</row>
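A hedged sketch of one thing to try (the class names here are assumptions and vary between Splunk versions, so inspect the actual elements with the browser dev tools first): give the title element an explicit minimum height and push the input fieldset down so it no longer overlaps.

<style>
  .dashboard-panel h2 {
    min-height: 45px !important;
    overflow: visible !important;
  }
  .fieldset {
    margin-top: 12px !important;
  }
</style>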
I want to export the result of a Splunk dashboard; authentication would be via SSO/SAML. I can provide the username and password and the Splunk dashboard URL so that Python can export the dashboard panel and save the exported result as CSV.
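A minimal Python sketch under stated assumptions: the host, token, and search string are placeholders, and with SAML/SSO a Splunk authentication token (Settings > Tokens) is generally needed, because SAML usernames and passwords usually cannot authenticate against the REST management port. The sketch exports the panel's underlying search via the export endpoint rather than scraping the dashboard itself.

import requests

SPLUNK_HOST = "https://splunk.example.com:8089"  # hypothetical management URL
TOKEN = "<auth-token>"                           # created under Settings > Tokens

# The search that drives the dashboard panel; must begin with "search"
# unless it starts with a generating command.
search = 'search index=main | stats count by host'

resp = requests.post(
    f"{SPLUNK_HOST}/services/search/jobs/export",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={"search": search, "output_mode": "csv", "earliest_time": "-24h"},
    verify=False,  # only if the instance uses a self-signed certificate
)
resp.raise_for_status()

# The export endpoint streams results directly, so no job polling is needed.
with open("panel_export.csv", "w") as f:
    f.write(resp.text)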
Hello, I have a search that runs in the web application interface (Splunk Enterprise). It returns results whenever log events are present within the search parameters (time window). When I execute the exact same search at the same time via the REST API using Postman, it completes (job status="DONE") but with zero available events — no events at all. Why might that happen? The search is copied and pasted from the web app into the API call in Postman. On occasion it has worked, but maybe one in a thousand calls will fetch results. Thank you.
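One frequent cause worth ruling out: the web UI silently applies the time-range picker, while a REST search job uses its own defaults, so the two "identical" searches may cover different windows. Passing the time range explicitly in the job request removes the ambiguity; a minimal sketch of the POST body fields for /services/search/jobs (values are examples):

search=search index=web sourcetype=access_combined error
earliest_time=-24h
latest_time=now
exec_mode=blocking

Also note that a search string submitted via REST must begin with the literal word "search" unless it starts with a generating command.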
I'm attempting to build a search around Okta authentication logs. I want to run a query to check for any multi-factor update/change, collect the user ID, and pass that to another search where I can see the geolocation data where the user has authenticated previously over a specific time span. Essentially, I'm trying to build a search to see if a user who requested an MFA change is doing it from a different geolocation than they normally authenticate from. The query below shows all users that have had an MFA change, with their corresponding geolocation data. Is there a way to pass the user ID(s) to a different search where I can look at 7 days' worth of their authentication activity to see if the geolocation matches? I've researched subsearches, but that doesn't work because I need the user ID first, and the subsearch runs before I have it. I looked at map, which seems like the best solution, but there are a lot of warnings about it being resource intensive. If anyone can point me in the right direction, it would be very much appreciated.

index=okta eventType="user.mfa.factor.update"
| stats values(actor.id), values(client.geographicalContext.State)
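A minimal sketch of a map-free alternative: search both event types over the full 7 days in one pass, then compare states per user. The login event type user.session.start is an assumption; substitute whatever your Okta events actually use.

index=okta earliest=-7d (eventType="user.mfa.factor.update" OR eventType="user.session.start")
| eval state='client.geographicalContext.State'
| stats values(eval(if(eventType=="user.mfa.factor.update", state, null()))) as mfa_state
        values(eval(if(eventType=="user.session.start", state, null()))) as usual_states
        by actor.id
| where isnotnull(mfa_state)

Rows where mfa_state does not appear in usual_states are the candidates for MFA changes from an unusual location.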
Hello, how would I write a monitor stanza to pull the data/files from variable paths/locations? Some example paths, along with the monitor stanza I wrote, are provided below.

Paths/locations:

/RTAM/PROD_LOGS/PROD_DATA/2021-01-28_03-39-15/AUDITDATA/APPS/AUDITPROD.txt
/RTAM/PROD_LOGS/PROD_DATA/2021-01-29_09-12-12/AUDITDATA/APPS/AUDITPROD.txt
.........
..........
.........
/RTAM/PROD_LOGS/PROD_DATA/2021-02-02_06-19-10/AUDITDATA/APPS/AUDITPROD.txt
/RTAM/PROD_LOGS/PROD_DATA/2021-02-02_08-07-14/AUDITDATA/APPS/AUDITPROD.txt

Monitor stanza I wrote:

[monitor:// /RTAM/PROD_LOGS/PROD_DATA/.../AUDITDATA/APPS/AUDITPROD.txt]

Is this going to work to pull the data/files from all of the locations mentioned above? Any help/feedback will be highly appreciated. Thank you.
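For reference, the ... wildcard in a monitor path does recurse through any number of subdirectories, so the approach is sound; the one visible problem is the space after monitor://, which makes the path invalid. A minimal inputs.conf sketch (index and sourcetype names are hypothetical):

[monitor:///RTAM/PROD_LOGS/PROD_DATA/.../AUDITDATA/APPS/AUDITPROD.txt]
index = prod_audit
sourcetype = auditprod
disabled = 0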
Hello Splunk Community, how can I move the addtotals field so it displays as the first column rather than the last in this chart?

Currently:

_time             Host123  Host456  total
2022-02-24 22:00  0        2        2

Would like:

_time             total  Host123  Host456
2022-02-24 22:00  2      0        2

Current code:

index="Dept_data_idx" eventType="Created" status="success" host=*
| bucket _time span=1h
| stats count by _time host
| addtotals
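A minimal sketch: table lists named fields first and then expands the wildcard over whatever remains without repeating fields already named, so putting the total column (addtotals names it Total by default) before the * pins it to the front.

index="Dept_data_idx" eventType="Created" status="success" host=*
| bucket _time span=1h
| stats count by _time host
| addtotals
| table _time Total *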
I'm trying to create a calculated field (eval) that will coalesce a bunch of username fields, then perform match() and replace() functions within a case statement. Here's the scenario:

Possible user fields: UserName, username, User_ID. User values need the domain removed (e.g., "user@domain.com" or "ad\user" needs to become "user").

Here is how it can be done in two evals (I newlined and indented each case for readability):

| eval user_coalesced = coalesce(UserName, username, User_ID)
| eval user = case(
    match(user_coalesced, ".*@.*"), replace(user_coalesced, "@.*", ""),
    match(user_coalesced, "^ad\\\\"), replace(user_coalesced, "^ad\\\\", ""),
    true(), user_coalesced
  )

Any ideas on how I can get this down to one? I thought about putting the coalesce() into each case, but that seems inefficient.
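A minimal sketch of a single-eval version: replace() returns its input unchanged when the pattern doesn't match, so an alternation covering both the leading ad\ prefix and the trailing @domain handles all three cases without a case() at all.

| eval user = replace(coalesce(UserName, username, User_ID), "^ad\\\\|@.*", "")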
I have a dashboard that is based on a scheduled report. The report is scheduled to run at 06:00 every day, and every day the job shows as done with success status, yet there is nothing in the report. When I run the report manually, it takes 1 hour to complete with lots of search results (events); however, when scheduled, it shows "Done" after 1 hour (sometimes a couple of minutes) with an empty report (0 events). Why is the report not generating results? Can you help troubleshoot the problem?
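Two things worth checking first: scheduled reports run as their owner, so permissions and time-range tokens can differ from a manual run, and the scheduler may be skipping or truncating the job. The scheduler's own log records both; a minimal sketch (the report name is a placeholder):

index=_internal sourcetype=scheduler savedsearch_name="My Report"
| table _time status run_time result_count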
We have lots of firewalls (both internal and internet facing) feeding into our CIM Network_Traffic model within Enterprise Security. I would like to be able to distinguish the traffic that comes from the internet from other traffic. One way that occurred to me is to modify the CIM Network_Traffic model to have an extra "inheritance" (alongside Allowed_Traffic and Blocked_Traffic): something like Internet_Traffic, with the constraint specifying the appropriate dvc and src_interface values. Is this a good idea? Would it break anything? How would it work w.r.t. updates/upgrades to the CIM model?
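For illustration only: a child dataset's constraint is just an additional search filter layered on top of the root Network_Traffic constraint, so the addition might look like the line below (the dvc and src_interface values are hypothetical).

dvc=edge-fw-* src_interface=outside

One caveat: changes to the shipped data model should be made through the UI (or kept in an app's local directory) so a CIM app upgrade does not overwrite them, and shipped ES content will not know about a custom dataset.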
We are using Splunk Cloud, and management made the decision to send from UFs straight to the Splunk Cloud indexers. As such, we have run into a number of issues with various TAs not deployed to the Cloud indexers. How can I generate a list of deployed apps/TAs that are on the Cloud indexers?
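A minimal sketch of one approach, assuming the rest command is permitted in your Splunk Cloud environment (it is restricted in some stacks): query the apps endpoint across the search peers.

| rest /services/apps/local splunk_server=*
| table splunk_server title version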
Hi, I'm trying to route data to a specific index based on a value in a field. I have a series of data that looks like this:

Mar 1 16:26:52 xxx.xxx.xxx.xxx Mar 01 2022 16:26:52 hostname : %FTD-6-113008: AAA transaction status ACCEPT : user = username
Mar 1 17:42:18 xxx.xxx.xxx.xxx Mar 01 2022 17:42:18 hostname : %ASA-6-611101: User authentication succeeded: IP address: xxx.xxx.xxx.xxx, Uname: username

My props.conf on the indexer looks like this:

[cisco:asa]
TRANSFORMS-01_index = force_index_asa_audit

My transforms.conf on the indexer looks like this:

[force_index_asa_audit]
DEST_KEY = _MetaData:Index
REGEX = (?:ASA|FTD)-\d+-(?:113008|113012|113004|113005|611101|605005|713166|713167|713185|716038|716039|713198|502103|111008|111010)
FORMAT = asa_audit

But unfortunately nothing happens. I've also tried using source in props.conf, with no successful result. Do you have any idea? Thanks a lot, Marta
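The regex itself matches the sample events, so the usual suspects are elsewhere: the transform only fires on the first full Splunk instance that parses the data (a heavy forwarder upstream would mean the indexer never gets the chance), the sourcetype must be exactly cisco:asa at parse time, and the target index asa_audit must already exist. A quick way to confirm the config is actually loaded on the box doing the parsing:

splunk btool props list cisco:asa --debug
splunk btool transforms list force_index_asa_audit --debug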
Hi, the status of our indexer cluster peers occasionally fluctuates from Up to Pending. I have verified that there are no resource shortage problems (memory/CPU) when the indexer cluster peers fluctuate. What could be the reason for the fluctuating indexer peers?

splunkd health.log shows the messages below:

01-02-2022 02:00:11.893 +0000 INFO PeriodicHealthReporter - feature="Indexers" color=yellow indicator="missing_peers" due_to_threshold_value=1 measured_value=3 reason="The following peers are in transition: Indexer1(Pending), Indexer2(Pending), Indexer3(Pending). " node_type=indicator node_path=splunkd.indexer_clustering.indexers.missing_peers

watchdog/watchdog.log shows the messages below:

01-02-2022 02:15:19.01 +0000 ERROR Watchdog - No response received from IMonitoredThread=0x7fe9c63f70 within 8000 ms. Looks like thread name='CMMasterRemoteStorageThread' tid=28158 is busy !? Starting to trace with 8000 ms interval.
01-02-2022 02:16:23.12 +0000 INFO Watchdog - Stopping trace. Response for IMonitoredThread ptr=0x7fefb70 - thread name='CMMasterRemoteStorageThread' tid=28158 - finally received after 72049 ms (estimation only).
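Reading the logs posted above, the watchdog errors point at the cluster manager's CMMasterRemoteStorageThread blocking for over a minute on remote-storage (SmartStore) calls, which can delay heartbeat processing and push peers into Pending. A hedged sketch of a search to line the peer transitions up against those stalls (the host value is a placeholder, and component names can vary by version):

index=_internal host=<cluster-manager> (CMPeer OR "CMMasterRemoteStorageThread") ("Pending" OR "No response received")
| sort 0 _time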
How do I create a search that would display the time, user, hostname, and URL that a list of users are visiting?
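A minimal sketch, assuming web proxy data with CIM-style field names (the index, sourcetype, usernames, and field names are placeholders to adjust to your data):

index=proxy sourcetype=proxy_logs user IN ("user1", "user2", "user3")
| table _time user src url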
Windows disk performance latencies ("Avg. Disk sec/Transfer", etc.) are given in seconds. Splunk_TA_windows renders these for the CIM as:

[Perfmon:LogicalDisk]
EVAL-latency = if(counter=="Avg. Disk sec/Transfer",Value*1000,null())
EVAL-read_latency = if(counter=="Avg. Disk sec/Read",Value,null())
EVAL-write_latency = if(counter=="Avg. Disk sec/Write",Value,null())

Why is one given as milliseconds, but the others as seconds?
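It does look inconsistent as shipped. If you want all three in the same unit in the meantime, a hedged sketch of a local override (placed in a local props.conf so TA upgrades don't clobber it) that brings the read and write latencies up to milliseconds as well:

[Perfmon:LogicalDisk]
EVAL-read_latency = if(counter=="Avg. Disk sec/Read",Value*1000,null())
EVAL-write_latency = if(counter=="Avg. Disk sec/Write",Value*1000,null())

Whether the CIM actually expects seconds or milliseconds for these fields is worth confirming against the CIM documentation before standardizing.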