All Topics

I have the following query displaying as a table in a classic dashboard:

    | makeresults format=json data="[{\"item\":\"disk1\", \"size\":2147483648, \"size_pretty\":\"2 GB\"}, {\"item\":\"disk2\", \"size\":1099511627776, \"size_pretty\":\"1 TB\"}]"
    | table item size size_pretty

When you sort by "size", the table behaves as expected (2 GB is smaller than 1 TB). When you sort by "size_pretty", though, it of course puts "1 TB" first, followed by "2 GB" (lexicographic sort order).

What I would like, however (purely about user experience), is to:
1) Hide the "size" column, as the raw byte values are pretty horrible to read.
2) When the user clicks the "size_pretty" column to sort the table, have it actually sort by "size" (up or down), even though that column is not visible to the user. Sorted smallest to largest, the output would look like:

    item     size_pretty
    disk1    2 GB
    disk2    1 TB

Is there any way to achieve this? Note that I am on Splunk Cloud, so I do not have access to the file system.

(If it can be done on a dynamic dashboard instead, I'd consider that.)

Bonus points if I can also apply column formatting with a colour scale, as you would on a normal table.

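Not a full answer to the click-to-sort part (which would likely need custom JS or Dashboard Studio table settings), but a minimal sketch of the non-interactive half: pre-sorting by the numeric field and then dropping it from the display.

    | makeresults format=json data="[{\"item\":\"disk1\", \"size\":2147483648, \"size_pretty\":\"2 GB\"}, {\"item\":\"disk2\", \"size\":1099511627776, \"size_pretty\":\"1 TB\"}]"
    | sort 0 size
    | fields item size_pretty

The "0" on sort removes the default 10,000-row limit, and using fields (rather than table) keeps the size column out of the results so users cannot re-sort by it at all.
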
I am having trouble with ingesting my data into Splunk consistently. I have an XML log file that is constantly being written to (about 100 entries per minute); however, when I search for the data in Splunk I only see sporadic results, e.g. results for 10 minutes, then nothing for the next 20, and so on. My inputs and props configs are below.

inputs.conf:

    [monitor:///var/log/sample_xml_file.xml]
    disabled = false
    index = sample_xml_index
    sourcetype = sample_xml_st

props.conf:

    [ sample_xml_st ]
    CHARSET = UTF-8
    KV_MODE = xml
    LINE_BREAKER = (<log_entry>)
    NO_BINARY_CHECK = true
    SHOULD_LINEMERGE = FALSE
    TIME_FORMAT = %Y%m%d-%H:%M:%S
    TIME_PREFIX = <log_time>
    TRUNCATE = 0
    description = describing props config
    disabled = false
    pulldown_type = 1
    TZ = -05:00

Sample XML log:

    <?xml version="1.0" encoding="utf-8" ?>
    <log>
      <log_entry>
        <log_time>20230724-05:42:00</log_time>
        <description>some random data 1</description>
      </log_entry>
      <log_entry>
        <log_time>20230724-05:43:00</log_time>
        <description>some random data 2</description>
      </log_entry>
      <log_entry>
        <log_time>20230724-05:43:20</log_time>
        <description>some random data 3</description>
      </log_entry>
    </log>

The XML log file is constantly written to, with a new log_entry appended each time.

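For comparison, a line-breaking variant sometimes used for this kind of XML event log: the text matched by the first capturing group in LINE_BREAKER is discarded at the event boundary, so capturing only the whitespace between entries keeps the <log_entry> tag itself inside each event. This is a sketch against the sample above, not a confirmed fix for the sporadic ingestion:

    [sample_xml_st]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = (\s+)<log_entry>
    TIME_PREFIX = <log_time>
    TIME_FORMAT = %Y%m%d-%H:%M:%S
    KV_MODE = xml
    TRUNCATE = 0
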
Hello Members, I have seen and used the accum command, but it does not quite give me what I want. The search below gives me a line chart with the event count over the time range:

    index=main sourcetype=cisco:asa host=* message_id=113004
    | eval Date=strftime(_time, "%Y-%m-%d %H:%M:%S")
    | timechart count BY message_id

The chart type can be anything. What I would like is an accumulated total over a time period such as 24 hours: counting every hour is fine, but each hour should show the running cumulative count, ending with the total for the whole range (i.e. the 24 hours).

Thanks for being a great source of help here, eholz1

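A minimal sketch of the usual pattern for this, assuming an hourly bucket is wanted: streamstats after timechart keeps a running sum across the rows, so the plotted line climbs to the range total.

    index=main sourcetype=cisco:asa host=* message_id=113004
    | timechart span=1h count AS hourly_count
    | streamstats sum(hourly_count) AS cumulative_count
    | fields _time cumulative_count

If the BY message_id split is kept, each resulting column would need its own running sum (e.g. with foreach) rather than a single streamstats clause.
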
When I run the following query:

    index="myindex" sourcetype="hamlet" environment=staging
    | top limit=10 client
    | eval percent = round(percent)
    | rename client AS "Users", count AS "Requests", percent AS "Percentage %"

I get these results:

    Users                      Requests    Percentage %
    joe.smith@alora.com        118         21
    martha.taylor@gmail.com    80          14
    paul.gatsby@aol.com        68          12

What I want instead are these results:

    Users            Requests    Percentage %
    joe.smith        118         21
    martha.taylor    80          14
    paul.gatsby      68          12

I hope this helps. Sorry if my original post was confusing. I appreciate your help. Thank you.

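A minimal sketch of one way to strip the domain before the rename, assuming the client field is always an email address (split/mvindex keeps everything before the @; a regex replace would work just as well):

    index="myindex" sourcetype="hamlet" environment=staging
    | top limit=10 client
    | eval percent = round(percent)
    | eval client = mvindex(split(client, "@"), 0)
    | rename client AS "Users", count AS "Requests", percent AS "Percentage %"
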
Currently we have Microsoft IIS web servers out in the environment, but the fields they log are inconsistent. Is there any way to enable logging for all available fields? We have a deployment server; would that be of help in this situation?

For context, I've included a list of some of the specific fields we're looking for: date, time, c-ip, cs-username, s-ip, s-port, cs-method, cs-uri-stem, etc.

Hi guys! I have a static snapshot lookup that stores a lot of information about vulnerabilities that were active on my hosts on Mar/01. This SPL shows me the full list of unique identifiers:

    | inputlookup gvul:collectMar.csv
    | table UniqID

This SPL shows me the list of unique identifiers active today:

    earliest=-1d index=myindex sourcetype=mysourcetype
    | table UniqID

My team works to fix these vulnerabilities, so I want a timechart showing the progress of that work, based on the snapshot lookup. I don't care about new vulnerabilities discovered since the snapshot. This is the SPL that I'm using to do this:

    earliest=1677719215 index=myindex sourcetype=mysourcetype
    | join type=inner UniqID [ | inputlookup gvul:collectMar.csv | table UniqID ]
    | timechart span=1d count(UniqID)

So, is there a way to do this without using a join?

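A minimal sketch of the join-free pattern, assuming UniqID is a search-time field on the events: the subsearch is expanded (via format) into an OR of UniqID=... terms, so only events matching the snapshot are retrieved in the first place.

    earliest=1677719215 index=myindex sourcetype=mysourcetype
        [ | inputlookup gvul:collectMar.csv | fields UniqID | format ]
    | timechart span=1d count(UniqID)

Note that subsearch result limits apply if the snapshot is very large; also, dc(UniqID) instead of count(UniqID) would chart distinct identifiers per day, which may better reflect how many remain open.
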
My team needs to create a dashboard that monitors the number of DB connections per DB agent, and I'm pretty sure that I've found the metric for this in the metric browser under the DB tab (number of DB nodes), but I'm unable to find it when I'm adding a widget. Not sure if there is some other way to monitor this or if I'm missing something obvious.

Hi community, I have an issue where I am ingesting some XML data, but the data coming in is very sporadic. Any idea what could be causing this issue?

I'm trying to show which users logged into AWS with an assigned role and what they accessed or changed. Is there a specific AWS audit log I need to have ingested? We have people making changes with no record of when they made a change or when they logged in.

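If CloudTrail turns out to be the log that gets ingested (e.g. via the Splunk Add-on for AWS), a sketch of the kind of search this enables; the index, sourcetype and field names below assume that add-on's CloudTrail format:

    index=aws sourcetype=aws:cloudtrail eventName=AssumeRole
    | table _time, userIdentity.arn, requestParameters.roleArn, sourceIPAddress

The same sourcetype also records the individual API calls (eventName, eventSource, requestParameters), which covers the "what they accessed/changed" part.
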
hi, I have two lookups backed by the KV store, as they are huge:
* one is more than 250k rows
* the second is 65k rows.

The 250k-row lookup contains only an IP, while the second one contains an IP CIDR plus a LIST field. So I run a search like:

    | inputlookup list_250k
    | rename ip_cidr as ip
    | eval convert_ip=tostring(ip)
    | lookup list_65k ip_cidr AS convert_ip OUTPUT ip_cidr, list
    | where isNotNull(ip_cidr)
    | rename ip_cidr as found_in

I am getting results, but I am curious: are there any limits? If, for example, the search were being limited, would I see an error? (There is no progress bar indicating that it is working on something.)

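For reference, a sketch of the settings usually checked when large KV store lookups are in play; the names come from limits.conf and the values shown are placeholders, not recommendations:

    [kvstore]
    max_rows_per_query = 50000
    max_size_per_result_mb = 50

If a lookup returns fewer rows than expected, these caps (and max_matches on the lookup definition in transforms.conf) are worth checking.
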
I am successfully using some simple ldapsearch + ldapgroup searches to produce membership lists for various AD groups, e.g.:

    | ldapsearch search="(&(objectClass=Group)(cn=Remote Desktop Users))"
    | table cn,distinguishedName
    | ldapgroup
    | table cn,member_name,member_type

The searches work, but always show me all possible membership types (DIRECT, NESTED, etc.). I would like to filter the end results so that only DIRECT members are returned. I've tried inserting a | where clause after the ldapgroup or the final table command, but it just returns an empty set. I'm clearly not quite understanding how to pipeline the output correctly. Any help is much appreciated.

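A minimal sketch of the filter, assuming member_type carries the literal string DIRECT as shown in the output above. With where, the comparison value has to be quoted; an unquoted DIRECT is treated as a field name, which would explain the empty result set.

    | ldapsearch search="(&(objectClass=Group)(cn=Remote Desktop Users))"
    | table cn,distinguishedName
    | ldapgroup
    | where member_type="DIRECT"
    | table cn,member_name,member_type

If that still returns nothing, the value may differ in case; | where upper(member_type)="DIRECT" covers that.
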
Hi Splunkers, I'm trying to send alerts from a Splunk Cloud instance to a Cortex XSOAR instance. The guide I'm following is this one: Splunk py for NON ES users. Since we do not have Enterprise Security, I must follow the steps described in the section "Splunk non-Enterprise Security Users".

What confuses me a bit is step 4: it only says to create a macro to capture the fields saved in a local file, but gives no indication of how to achieve this. I mean, I know I can put into a macro code I want to reuse across several alerts, for example the same where conditions in multiple different rules, but what should the syntax be to achieve what the guide describes? For example, if after steps 2 and 3 I have the following field list:

    list(field) Blade, ReachedPorts, count, dst_ip, earliest_timestamp, latest_timestamp, src_ip, src_user, total_dst_port

what code should I put in the macro to "capture" them? Should I simply put the field list in the macro code?

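Without knowing the guide's exact intent, a sketch of what such a macro might look like if "capture the fields" simply means emitting them explicitly at the end of each alert search; the macro name is made up for illustration:

    # macros.conf
    [xsoar_alert_fields]
    definition = fields Blade, ReachedPorts, count, dst_ip, earliest_timestamp, latest_timestamp, src_ip, src_user, total_dst_port
    iseval = 0

Each alert search would then end with | `xsoar_alert_fields` so every alert hands the same field set to the integration.
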
Hello Splunkers, I would like to know if it's possible to prevent the /lib path of a Splunk app from being overwritten when the app is upgraded. For instance, I edited the cacert.pem file at the following path:

    /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/certifi/cacert.pem

After the upgrade of the Splunk_TA_microsoft-cloudservices app, the cacert.pem was replaced. Would it be possible to avoid that?

Regards, GaetanVP

Hello everyone! I have run into a bit of a challenge. Long story short, we are required to show our customer that we are completing audits each time period. I recently saw another site's dashboard and am trying to replicate it; the only issue is that they didn't build it themselves, so they don't know how it's done.

I'll explain it as best as I can. When they open the dashboard, a table populates with a comment, a recorded-by and a recorded-on field. They have the option to add a record: a comment box pops up, they type the comment and click save. The table then displays their comment, their user name and a timestamp. It persists week after week, and has a search bar to find specific records as well.

Does anyone have any idea how to create something like this?

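One common building block for this kind of dashboard (a sketch, not necessarily how that site did it) is a lookup that the dashboard both reads and appends to; the lookup and field names below are made up:

    | makeresults
    | eval comment="Weekly audit completed", recorded_by="jsmith", recorded_on=strftime(now(), "%Y-%m-%d %H:%M:%S")
    | fields comment recorded_by recorded_on
    | outputlookup append=true audit_comments.csv

Displaying the history is then just | inputlookup audit_comments.csv. The interactive part (the pop-up comment box and Save button) is typically a dashboard text input feeding a token into a search like the one above, or custom JavaScript on a classic dashboard.
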
I need to create a query where a field's values are populated from a lookup table as the default. Basically, replace the "All" in the query below with the values of that column from the lookup:

    index = "XXX" log_group IN("All")
    | stats sum(event)

Thanks

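A sketch of one interpretation, assuming the intent is "when All is selected, expand to every log_group in the lookup": the IN("All") clause is replaced by a subsearch that turns the lookup column into an OR list via format (the lookup file name is made up):

    index = "XXX"
        [ | inputlookup log_groups.csv | fields log_group | format ]
    | stats sum(event)

If "All" comes from a dashboard dropdown instead, an alternative is to give the dropdown a dynamic search over the lookup and use value="*" for the All choice.
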
Hi everyone, we are currently facing an issue using a load balancer with a search head cluster. It is an Elastic Load Balancer in AWS, and we can't use the load-balancer-generated cookie; it has to be an application cookie. I have seen this issue before and seen people fix it by using the cookie session_id_PORTNUMBER, but our deployment doesn't need users to connect to Splunk Web: it's just a service connecting through the ELB to the REST API on port 8090 (we had to change it from the default 8089). I have also tried splunkd_443 to no avail. The problem is that the ELB sends the login request and the getjobs requests to different servers. Is there any other application cookie I could use? This is for Splunk 9.0.4. Thanks everyone!

Hello Team, I presently have a single numeric value that is published to AppD every day; below is a sample of this extract:

    name=Custom Metrics|GRA|Pipeline View|WinSec|Total Input, value=7366, aggregator=AVERAGE, time-rollup=AVERAGE, cluster-rollup=INDIVIDUAL

Over the course of a month we will have a curve with 30 data points, as depicted by the screenshot below. Here are the requirements we are trying to achieve; I do not know where to start with the design and implementation, and would really appreciate some tips:

----------------------------------------------------------------------
Variance Warning Alert

The warning alert is based on a comparison of today's metric versus the metric from the previous day or week, using the rules below.

Resource: the daily cadence report located below.

Rules:
- Sat total is compared to the previous Sat total
- Sun total is compared to the previous Sun total
- Mon total is compared to the previous Mon total
- Tues total is compared to Mon (yesterday) total
- Wed total is compared to Tues (yesterday) total
- Thur total is compared to Wed (yesterday) total
- Fri total is compared to Thurs (yesterday) total

Calculation (day-to-day variation):

    allowableWarningMarginValue = todays metric * 0.5
    allowableCriticalMarginValue = todays metric * 0.8
    ingestVariation = Abs(todays total - comparison day total)

    If (ingestVariation >= allowableWarningMarginValue and ingestVariation < allowableCriticalMarginValue) then
      set heatmap value = AMBER
      generate a warning message
    Else If (ingestVariation >= allowableCriticalMarginValue) then
      set heatmap value = RED
      generate a critical error message
    Else
      set heatmap value = GREEN
    Endif

Example (using Friday June 23, WinSec pipeline):

    Todays Total (Fri June 23) = 7366
    Comparison Total (Thurs June 22) = 300
    allowableWarningMarginValue = 7366 * 0.5 = 3683
    allowableCriticalMarginValue = 7366 * 0.8 = 5892.8
    ingestVariation = abs(7366 - 300) = 7066

    if ingestVariation (7066) > allowableCriticalMarginValue (5892.8)
      set heatmap value = RED
      generate a critical message
    else if ingestVariation (7066) > allowableWarningMarginValue (3683)
      set heatmap value = AMBER
      generate a warning message
    else
      set heatmap value = GREEN
    endif

AppD Result: for a given day I would like to simply output a single heatmap colour. The screenshot below illustrates how this heatmap colour would look over successive days, with each day's colour simply overriding the previous one.

Thank you in advance for any help you can provide. Kind regards, Tony.

I have data like the below:

    Servername    Downtime
    Web1          7 day 2 h
    Web2          2 h 23 min
    Web2          20 min
    Web3          17 min

I have servername and downtime fields. The same servername can appear multiple times with different downtimes. How can I group this by servername, adding the downtimes together into a single row per server? Please help with an SPL query.

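A sketch under the assumption that downtime is free text in the "N day N h N min" style shown above: pull each unit out with rex, convert to seconds, sum per server, then format the total back into a duration.

    | rex field=downtime "(?<d_day>\d+)\s*day"
    | rex field=downtime "(?<d_hour>\d+)\s*h\b"
    | rex field=downtime "(?<d_min>\d+)\s*min"
    | eval downtime_sec = coalesce(tonumber(d_day),0)*86400 + coalesce(tonumber(d_hour),0)*3600 + coalesce(tonumber(d_min),0)*60
    | stats sum(downtime_sec) AS total_downtime_sec BY servername
    | eval total_downtime = tostring(total_downtime_sec, "duration")

tostring(x, "duration") renders the summed seconds as D+HH:MM:SS; if the original "x day y h z min" wording is required, that would need a custom eval instead.
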
Hi All, we are trying to create a report of how many times a dashboard has been accessed in the last 24 hours in Splunk. Can you please help us create a query to achieve this? Your kind inputs are highly appreciated! Thank you!

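A sketch based on Splunk Web's own access logs; the index and sourcetype are the ones normally present on a search head, and the dashboard name in uri_path is a placeholder:

    index=_internal sourcetype=splunk_web_access uri_path="*/app/*/my_dashboard*" earliest=-24h
    | stats count AS views BY uri_path, user

Dropping the dashboard name from uri_path while keeping the BY uri_path split gives the same report across all dashboards at once.
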
Can we create a customized landing page for an existing Splunk application, with some background colour/image, and navigate to the dashboards in that app with the help of a dropdown or button? If yes, how can we do it in classic Splunk? Is it also possible to do this with Dashboard Studio?

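For the classic (Simple XML) side, a small sketch of a landing dashboard whose dropdown jumps to another view in the app when changed; the app and dashboard names are placeholders, and the background colour/image would still need CSS or HTML styling on top of this:

    <form>
      <label>Landing Page</label>
      <fieldset submitButton="false">
        <input type="dropdown" token="target">
          <label>Go to dashboard</label>
          <choice value="dashboard_one">Dashboard One</choice>
          <choice value="dashboard_two">Dashboard Two</choice>
          <change>
            <link>/app/my_app/$value$</link>
          </change>
        </input>
      </fieldset>
      <row>
        <panel>
          <html>
            <h1>Welcome to My App</h1>
            <p>Select a dashboard above to get started.</p>
          </html>
        </panel>
      </row>
    </form>

Setting this view as the app's default in default/data/ui/nav/default.xml would make it the landing page when the app is opened.
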