All Topics
Sorry, team, to bother you again. I have a query that is giving me issues:

| eval InT = strptime('LastPickupDate',"%m-%d-%Y %H:%M:%S") + (('DaysOfARVRefil' + 28)*86400)
| stats list(InT) by FacilityName

but the column InT is all blank. Also, how do I convert InT back to a readable date and list the values by facility name? Many thanks, Osita
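For reference, a hedged sketch of one way to debug this, using the field names from the question: strptime returns null when the format string does not match the data exactly (which would leave InT blank), and strftime converts an epoch value back into readable text. The date format and the tonumber conversion are assumptions.

```
| eval InT = strptime('LastPickupDate', "%m-%d-%Y %H:%M:%S") + ((tonumber('DaysOfARVRefil') + 28) * 86400)
| eval InT_readable = strftime(InT, "%m-%d-%Y %H:%M:%S")
| stats list(InT_readable) AS InT by FacilityName
```

If InT is still blank, check that LastPickupDate really matches the "%m-%d-%Y %H:%M:%S" pattern and that DaysOfARVRefil is numeric.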
Gurus, I have an Infoblox query that simply measures the total number of queries over a certain period, by host, for a given Infoblox cluster. The hosts are usually pretty uneven (roughly 25% : 75%), and I can show that in a pie chart easily. However, I'm also interested in measuring the "imbalance factor" so that I can rank clusters from most to least imbalanced. I have no clue where to start, since I'd need two values to do math with, but "count" isn't even a field. Is this possible? Thanks.
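A hedged sketch of one possible approach (the cluster and host field names are assumptions, not taken from actual Infoblox data): count per host, total per cluster with eventstats, then score each cluster by the largest host's share.

```
index=infoblox_dns
| stats count by cluster host
| eventstats sum(count) AS cluster_total by cluster
| eval share = count / cluster_total
| stats max(share) AS imbalance by cluster
| sort - imbalance
```

A perfectly balanced two-host cluster would score 0.5; the closer the score is to 1.0, the more imbalanced the cluster.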
I have a use case, which is basically about alerting users about vulnerabilities when we need them to take action. This is a centralised pull from Tenable; so far so good. My issue is how to defer and control the sending of the alert so it doesn't wake people up in various time zones around the world. I don't want them getting alerts at 2am, or on Sunday in their time zone, unless Sunday is a workday there - that's a whole different matter. I looked at iplocation allfields=true and can get the time zone, so that is a step forward. But I can't seem to find out how to convert the America/Vancouver time zone name into a UTC offset that I can play with. I'm sure some of you with global companies must have dealt with this challenge. My understanding is that you can get fined in Germany for communicating with employees out of hours. Let's say I manage to determine the correct textual time zone, like America/Chicago - how do I translate that to a UTC offset? Of course, if anyone can spot what I'm trying to do and has a better way, then I'm all ears.
The objective is to only send out an alert if the 'low' and 'high' strings are both detected more than 5 minutes apart. In other words, at 5 minutes or less the alert should not fire; at more than 5 minutes, it should process the event and send out an alert if low or high is received in the syslog. Below is what is currently configured in the Splunk rules for both low and high, but I don't really understand it. Can someone explain how it works?

Alert-Water High

index="watersb" item="Water Level"
| fields watersb_timestamp host machine_id location state status
| transaction host maxspan=5m
| eval status_count=mvcount(status)
| search status_count=1 status=high
| eval timestamp=strptime(watersb_timestamp,"%b %d %H:%M:%S")
| convert timeformat="%d %b %Y %H:%M:%S" ctime(timestamp)
| table timestamp host status machine_id location state

Alert-Water Low

index="watersb" item="Water Level"
| fields watersb_timestamp host machine_id location state status
| transaction host maxspan=5m
| eval status_count=mvcount(status)
| search status_count=1 status=low
| eval timestamp=strptime(watersb_timestamp,"%b %d %H:%M:%S")
| convert timeformat="%d %b %Y %H:%M:%S" ctime(timestamp)
| table timestamp host status machine_id location state
Hello, how do we edit the tooltip of a choropleth map to display an additional column and its value? And secondly, how do we rename the count column to something else? The moment we rename it, the map doesn't render.
I want a main dashboard to pull results from multiple application dashboards. I do not want to repeat the same queries in the main dashboard. Is this possible? Example:

<row>
  <panel>
    <table>
      <title>Overall_Status</title>
      <search>
        <query>index=clo_application_logs host IN (xxxx.com) "Unable to read the file" OR "DB ERROR" OR "JMS Exception Occurred" OR "outOfMemory" OR "ERROR - PricingManager" OR "ERROR - DataService"
| stats count
| eval Overall_Status=case(count&gt;0,"CRITICAL", 1=1, "NORMAL")
| append [search index=clo_application_logs host IN (xxxx.com xxxx.comm) "FAIL" | stats count | eval Overall_Status=case(count&gt;0,"CRITICAL", 1=1, "NORMAL")]
| stats count by Overall_Status
| eval colour=case(test=="NORMAL", "0", test=="CRITICAL", "1", 2=2, Unknown)
| sort - colour
| fields Overall_Status
| head 1
| appendpipe [stats count | where count="0" | fillnull value="No Results" Overall_Status]</query>
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
      <format type="color" field="Overall_Status">
        <colorPalette type="map">{"CRITICAL":#DC4E41,"NORMAL":#53A051}</colorPalette>
      </format>
    </table>
  </panel>
</row>
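One documented Simple XML pattern that may fit here, offered as a sketch rather than a drop-in fix: save the shared query as a report, then reference it from any dashboard panel with a search ref, so the SPL lives in one place. Overall_Status_Report is a hypothetical report name.

```
<search ref="Overall_Status_Report">
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
</search>
```

Every dashboard that uses the ref reuses the same underlying report, so the query only needs to be maintained once.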
Newbie in Splunk here. How do I extract the value zzz@zzz.com (at the end of the payload below) into a new field named "user"?

POST /xxxxx/xxxx/xxx/xxxxx HTTP/1.1\r\nHost: xxxx.xxxx.com\r\nConnection: Keep-Alive\r\nAccept-Encoding: gzip\r\nCF-IPCountry: US\r\nX-Forwarded-For: 1.1.1.1, 2.2.2.2\r\nCF-RAY: 715ae60ec98f02ce-MIA\r\nContent-Length: 37\r\nX-Forwarded-Proto: https\r\nCF-Visitor: {""scheme"":""https""}\r\nsec-ch-ua: "" Not A;Brand"";v=""99"", ""Chromium"";v=""101"", ""Google Chrome"";v=""101""\r\nsec-ch-ua-mobile: ?1\r\nauthorization: *************\r\ncontent-type: application/json\r\nbundleid: com.xxx.xxxxx\r\naccept: application/json, text/plain, */*\r\nsecurekey: Sssssss==\r\nuser-agent: Mozilla/5.0 (Linux; Android 12; SM-A326U) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.0.0 Mobile Safari/537.36\r\nsec-ch-ua-platform: ""Android""\r\norigin: https://xxx.com\r\nsec-fetch-site: cross-site\r\nsec-fetch-mode: cors\r\nsec-fetch-dest: empty\r\nreferer: https://myxxx.com/\r\naccept-language: en-US,en;q=0.9\r\nCF-Connecting-IP: 1.1.1.1\r\nCDN-Loop: cloudflare\r\n\r\n{""user"":""zzz@zzz.com""}
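A hedged sketch using rex, assuming the doubled quotes in the pasted payload are a copy/paste artifact and the raw event contains standard JSON quoting:

```
... | rex "\"user\":\"(?<user>[^\"]+)\""
```

If the raw event really does contain doubled quotes, the pattern would need ""user"" instead of "user".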
I have some data that's coming in as follows:

"data": { "a": 100, "b": 200 }
"data": { "a": 50, "c": 75 }
...

I want to aggregate the values so I end up with a table of the sum of values by key:

<search> | chart sum(data.*) as *

This gives me the table:

a b c
150 200 75

Now I want to sort the columns by value so that they are in the order b, a, c. It looks like the "sort" keyword sorts rows, not columns. How would I do this? Note this is an extremely simplified example: the actual data has tons of keys, which are arbitrary UUIDs, and there are a lot of rows to sum. I need to aggregate, then sort by value so that the highest is on the left-hand side. I would also like to keep only the first n columns. It looks like "head" also works on rows, not columns. Any help would be greatly appreciated. Thanks.
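A hedged sketch of one common workaround: transpose turns columns into rows so that sort and head can operate on them, then a second transpose flips the result back. The limit of 10 columns is an arbitrary example; transpose 0 means "no row limit".

```
<search>
| chart sum(data.*) AS *
| transpose 0 column_name=key
| sort - "row 1"
| head 10
| transpose 0 header_field=key
| fields - column
```

The final fields command drops the leftover "column" field that the second transpose produces.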
Under the "Compliance" dashboard in InfoSec App for Splunk there is a number of monitored (AD) accounts, but that number is different from the number of accounts monitored under the Health tab. Is this normal? How do I ensure that they both display the correct number of monitored AD accounts?
Does anyone happen to know whether there is a default time range for the event hub input in the Splunk Add-on for Microsoft Cloud Services, and where the checkpoint value is stored? I am unable to find this information in https://docs.splunk.com/Documentation/AddOns/released/MSCloudServices/Configureeventhubs. Thanks.
Hi, I am struggling with an SPL search. I am trying to create a report which lists the online status of a specific site/location depending on whether a message has been received from it. I need the Online (or Offline) status grouped by day, which I have achieved so far with the SPL below. However, the challenge for me is that when a site/location goes "Offline", I would like to know the exact hour:minute at which the last communication was logged. Currently, the Last_Communication column shows me the date, but the time is 00:00:00, which I know is not true. I need the exact hour/minute of the last event logged for that specific day if it was "Online".

Current SPL:

| from datamodel:"mydatamodel"
| bin _time span=1d
| search field1="comm_message"
| eval Online_Status=if(like(Location_field,"xyz"),1,0)
| stats sum(Online_Status) AS Message_Counts by _time
| eval Online_Status=if(Message_Counts=0,"OFFLINE", "ONLINE")
| eval Last_Communication=if(Online_Status="ONLINE",(_time), "OFFLINE")
| convert ctime(Last_Communication)

Any help would be greatly appreciated. Thanks.
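One likely cause, offered as a hedged sketch: bin span=1d floors _time to midnight before stats runs, so the original event times are lost by the time Last_Communication is computed. Keeping a copy of the event time before binning preserves the exact hour/minute (the Location_field logic from the question is omitted here for brevity):

```
| from datamodel:"mydatamodel"
| search field1="comm_message"
| eval event_time=_time
| bin _time span=1d
| stats count AS Message_Counts max(event_time) AS Last_Communication by _time
| eval Online_Status=if(Message_Counts=0, "OFFLINE", "ONLINE")
| convert ctime(Last_Communication)
```

max(event_time) returns the latest real timestamp within each day, which ctime then renders with hours and minutes intact.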
Can you create a query that searches for all the logs that entered an index in the last 24 hours and groups them by index? Something like a table with the number of logs added per index over the selected time period. It would be much appreciated. Thank you so much for your help :)
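A hedged sketch of the usual approach: tstats reads indexed metadata rather than raw events, so counting events per index this way is much faster than a plain search.

```
| tstats count WHERE index=* earliest=-24h BY index
| sort - count
```

The earliest=-24h clause can be dropped if the dashboard or search time range picker is used instead.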
I'm trying to count the number of sessions (identified by sessionId) that have more than 2 intents (an intent is a field value), and also include the total number of sessions, including sessions with 0 or 1 intents. I can't figure out the concept for this query.

index=conversation botId=ccb
| eval intent_total=if(intent=*, 1, 0)
| stats sum(intent_total) by sessionId
| where intent_total > 2
| table sessionId intent_count
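A hedged sketch of one way to structure this: stats count(intent) counts only the events where the intent field is present, and eventstats captures the total session count before the filter removes the low-intent sessions.

```
index=conversation botId=ccb
| stats count(intent) AS intent_count by sessionId
| eventstats count AS total_sessions
| where intent_count > 2
| stats count AS sessions_over_2_intents max(total_sessions) AS total_sessions
```

Sessions whose events carry no intent field still appear after the first stats (with intent_count=0), so they are included in total_sessions.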
Hi, I am using a Cisco WSA proxy and I need help creating a use case for proxy avoidance/bypass. Can you please help me with the query?
I want to check whether a user is detected in other areas, based on IP. The detection criteria are pre-written scenarios, numbered rule_1, rule_2, and so on. The purpose of this task is to see when a user who should be in a specified location is detected by a scenario in another location. This is my log table:

country | title | Area       | detail              | userlist | ip
USA     | WA    | Washington | Washington Monument | user1    | 192.168.0.100
USA     | WA    | Washington | Washington Monument | user2    | 192.168.0.101
USA     | VA    | Virginia   |                     | user3    | 192.168.0.102
USA     | NJ    | New Jersey |                     | user4    | 192.168.0.103

I want to view
Please, I will be glad to get an answer to this query:

| eval InT = if(((lastpickupdate + DaysOfARVRefil + 28) > IIT), "Interrupted", "Active")

The "lastpickupdate" and "IIT" columns are in date format, whereas "DaysOfARVRefil" is in days (int). Please, how do I successfully run this query? Thanks, Osita
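A hedged sketch, assuming the date columns are strings in month-day-year format (adjust the strptime patterns to match the real data): both dates need to be converted to epoch seconds before doing arithmetic, and the day count multiplied by 86400 seconds per day.

```
| eval lp_epoch = strptime(lastpickupdate, "%m-%d-%Y")
| eval iit_epoch = strptime(IIT, "%m-%d-%Y")
| eval InT = if(lp_epoch + (tonumber(DaysOfARVRefil) + 28) * 86400 > iit_epoch, "Interrupted", "Active")
```

Comparing the raw date strings directly would compare text, not time, which is why the original eval misbehaves.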
Is there an easy way to implement a recovery alert in the same query as the alert query? For example, I have a system that creates a log file every 10 minutes if everything is working. I built a query that runs every half hour and tells me if there is something new in the log location. That part is easy enough, but I would also like the same query to be able to send a recovery notification. Or is this not going to be possible, because I want to trigger two different actions, and from what I can tell you can only configure one email or Slack action per alert? I did see that there is a Splunk add-on for VictorOps that has this functionality, but I wanted to check here first before I went down that route.
Hi, I am having no luck with a dashboard input restriction. I have a dashboard textbox input that queries a lookup. For instance, the input could be "hostname". I want the user to be able to put in the exact value, or a partial value with a wildcard "*". So if hostname = 12345ABCD, they could enter the exact value, or 12345A*, and return all hosts that match. BUT I don't want them to be able to just enter hostname = "*" and pull everything back. Any ideas how to sanitize the input so a user cannot just use "*"? Thank you.
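A hedged sketch of one possible guard (the token name host_tok and the 3-character minimum are arbitrary assumptions): strip the wildcards out of the token value and refuse to return results unless enough literal characters remain.

```
index=main hostname=$host_tok$
| where len(replace("$host_tok$", "\*", "")) >= 3
```

This still allows wildcarded searches, but a bare * fails the length check and returns nothing; validating the token in the form's change handler would be an alternative that avoids running the search at all.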
I need help appending this rest command to my query. The problem is that the rest command is adding its result to the first row, and I need it added to the row that was entered last.

| rest /services/authentication/current-context/context
| fields + username
| search username!=*splunk*
| append [| inputlookup test.csv ]
| append [| makeresults | eval user="test", description="test", manager="test", revisit=(now() + 7776000), user_added=now(), token_confirm="$confirm_addition$"]
| table username, user, description, user_added, revisit, category, department, description, manager
| outputlookup test.csv

Example: I go to my dashboard and enter user "tom". When I do, the rest command should display my username, since I entered the user "tom". Now I need to write this to the lookup table so that my name is next to the "tom" entry row.
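A hedged sketch of one way to attach the rest result to the new row instead of the first row: run rest inside an appendcols subsearch on the makeresults row, so username lands on the same row as the new entry (the placeholder field values are kept from the question).

```
| inputlookup test.csv
| append
    [| makeresults
     | eval user="test", description="test", manager="test", revisit=now()+7776000, user_added=now(), token_confirm="$confirm_addition$"
     | appendcols
         [| rest /services/authentication/current-context/context
          | search username!=*splunk*
          | fields username]]
| table username user description user_added revisit category department manager
| outputlookup test.csv
```

appendcols glues the rest output onto the makeresults row column-wise, so only the newly added row carries the current user's name.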
Hi, I am following this documentation from GCP [1], which says that YOUR_SPLUNK_HEC_URL must not include the HEC endpoint path (for example, /services/collector). My question relates more specifically to this section [2], which says the format should be

<protocol>://http-inputs.<host>.splunkcloud.com:<port>/<endpoint>

and that "You must add http-inputs- before the <host>". Which would be the correct URL: https://http-inputs.xxxx.splunkcloud.com:433 or https://http-inputs-xxxx.splunkcloud.com:433?

[1] https://cloud.google.com/architecture/deploying-production-ready-log-exports-to-splunk-using-dataflow#deploy_the_dataflow_pipeline
[2] https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector#Configure_HTTP_Event_Collector_on_Splunk_Cloud_Platform