All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have a search with a field extraction. It seems to be extracting the value, but it isn't showing up in its own column.

index=moogsoft "Return from ServiceNow (" | rex "Return from ServiceNow \((?<delay>\d+) seconds\)"

On the results page I only see the timestamp, the event, and the extracted delay field below the event. How do I display the results so that delay shows up in its own column next to the event?
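One approach (a sketch, assuming the rex above already populates delay) is to pipe the results into a table so each field gets its own column:

```
index=moogsoft "Return from ServiceNow ("
| rex "Return from ServiceNow \((?<delay>\d+) seconds\)"
| table _time _raw delay
```

In the default Events view, extracted fields appear beneath each event; rendering the results as a table (the Statistics tab) puts delay in a column of its own next to the event.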
Hello, good day! I have events in raw data from which I want to extract the drive information into a few fields and convert the values into GB.

event1: C:\Windows\system FreeSpace DeviceID FreeSpace C: 36247773184 96900616192 E: 26285309952
event2: C:\Windows\system DeviceID FreeSpace C: 36247773184 96900616192
event3: C:\Windows\system DeviceID FreeSpace C: 36247773184
event4: C: 36247773184 96900616192 E: 26285309952

My query:

index=A | rex "(?<Drive>\S+:\s+\d+)" | stats values(Drive) by host _raw

My output:

Host | _raw | Drive
A1 | C:\Windows\system FreeSpace DeviceID FreeSpace C: 36247773184 96900616192 E: 26285309952 | C: 36247773184
A2 | C:\Windows\system FreeSpace DeviceID FreeSpace C: 36247773184 96900616192 | C: 36247773184

I am only getting the first value, but I want all the values from the raw event, and I want to convert the byte values into GB. Please help me with that. Thank you, Veeru. "Happy Splunking"
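By default rex keeps only the first match per event; max_match=0 captures all of them. A sketch along these lines (the field names letter, bytes, and gb are illustrative, and the second number that sometimes follows a drive letter would need its own capture group if it matters):

```
index=A
| rex max_match=0 "(?<Drive>[A-Z]:\s+\d+)"
| mvexpand Drive
| rex field=Drive "(?<letter>[A-Z]):\s+(?<bytes>\d+)"
| eval gb=round(bytes/1024/1024/1024, 2)
| stats values(gb) by host, letter
```

mvexpand turns the multivalue Drive field into one row per match so the byte-to-GB conversion can run per drive.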
Hello, I'm trying to set up the Cisco Security Suite app, but it displays a 500 Internal Server Error when I click Set Up. All the necessary TAs are installed. Thanks.
Hello, I am working on an architecture drawing for Splunk. I downloaded the Visio stencil from the Splunk docs, but when I try to import it, it does not load. Has anyone faced a similar issue before? Any suggestions would be appreciated. Thanks
Hello, I have been unable to log in to my Splunk Answers account for the past three weeks, even though it accepts my credentials. When I enter my username and password, it sends a password reset link which I never receive. I spoke to customer support and they updated the email ID in the backend to receive the reset link, but no luck. Has anybody ever faced this sort of issue? If so, what steps did you take to resolve it? I am not sure whether it will log me back in again. Thanks
Hello all, I have integrated a universal forwarder with Splunk v8.2, but I am getting logs from unnecessary hosts. I am not sure how they started sending logs. Is there a way I can check why it started, and how I can stop them? Screenshot below for reference.
Let's just say I have multiple events like this:

names: John Sam Todd
favorite_colors: Blue Yellow Green

Each event might have a different number of field values, but the ratio of names to favorite_colors is 1:1. Is it possible to extract these into new events, or display them separately in a table like this?

name | favorite_color
John | Blue
Sam | Yellow
Todd | Green

I have tried mvexpand, but that only works for one multivalue field.
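A common pattern for this (a sketch, assuming names and favorite_colors are already multivalue fields) is to zip the two fields together, expand the combined field, and then split it back apart:

```
| eval pair=mvzip(names, favorite_colors)
| mvexpand pair
| eval name=mvindex(split(pair, ","), 0), favorite_color=mvindex(split(pair, ","), 1)
| table name favorite_color
```

mvzip pairs the values up positionally, which relies on the 1:1 ratio described above, so mvexpand only has to operate on a single multivalue field.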
We have a PCI requirement to disable TLS 1.1 and TLS 1.0 cipher suites such as:

- TLSv1.0 TLS_DHE_RSA_WITH_AES_128_CBC_SHA
- TLSv1.0 TLS_DHE_RSA_WITH_AES_256_CBC_SHA
- TLSv1.0 TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA
- TLSv1.0 TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA
- TLSv1.1 TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA
- TLSv1.1 TLS_DHE_RSA_WITH_AES_128_CBC_SHA
- TLSv1.1 TLS_DHE_RSA_WITH_AES_256_CBC_SHA

among others.

I checked a few docs and tested disabling anything less than TLS 1.2 with sslVersions = tls1.2: https://docs.splunk.com/Documentation/Splunk/8.2.6/Security/SetyourSSLversion

How can I be sure the above cipher suites are disabled and TLS 1.2 is the only version allowed? From previous posts I read that we can test with openssl, looking for errors (or for the full certificate response if the version is still open):

openssl s_client -connect ipaddress:port -tls1_1

Here is our current server.conf:

[sslConfig]
sslVersions = *,-ssl2
sslVersionsForClient = *,-ssl2
cipherSuite = TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH
Sorry, team, to bother you again. I have a search that is giving me issues:

| eval InT = (strptime('LastPickupDate',"%m-%d-%Y %H:%M:%S")) + (('DaysOfARVRefil' + 28)*86400)
| stats list(InT) by FacilityName

But the InT column is all blank. Also, how do I convert InT back to a readable date and list the values by facility name? Many thanks, osita
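If the strptime format string does not exactly match the actual contents of LastPickupDate, strptime returns null and InT comes out blank, so the raw value is worth checking first. To turn the epoch value back into a readable date, strftime works; a sketch keeping the field names from the query above:

```
| eval InT = strptime('LastPickupDate', "%m-%d-%Y %H:%M:%S") + (('DaysOfARVRefil' + 28)*86400)
| eval InT_readable = strftime(InT, "%d %b %Y %H:%M:%S")
| stats list(InT_readable) by FacilityName
```

strptime parses a string into epoch seconds; strftime does the reverse, so the pair brackets any date arithmetic done in seconds.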
Gurus, I have an Infoblox search that simply measures the total number of queries over a certain period, by host, for a given Infoblox cluster. They are usually pretty uneven (25% : 75%). I can use that in a pie chart easily. However, I'm also interested in measuring the "imbalance factor" so that I can rank clusters by most/least imbalanced. I have no clue where to start, since I'd need two values to do math with, but "count" isn't even a field. Is this possible? Thanks
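After a stats count, count is an ordinary field that later eval and stats commands can do math on. One hedged sketch (index, cluster, and host are assumed field names here) computes the busiest host's share of each cluster's traffic as an imbalance factor:

```
index=infoblox
| stats count by cluster, host
| eventstats sum(count) as total, max(count) as busiest by cluster
| eval imbalance=round(busiest/total, 2)
| stats first(imbalance) as imbalance by cluster
| sort - imbalance
```

For a two-host cluster, an imbalance of 0.5 means a perfect split and values approaching 1.0 mean one host handles nearly all queries, which makes the number directly rankable across clusters.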
I have a use case, which is basically about alerting users for vulnerabilities when we need them to take action This is a centralised pull from tenable so far so good My issue is how to defer and control the sending of the alert so it doesn't wake up people in various time zones around the world. I don't want them getting alerts at 2am or on Sunday in their timezone, unless Sunday is a workday - that's a whole different matter. I looked at ip lookup allitems=true  and can get the timezone, so that is a step forward But I can't seem to find out how to convert the Americas/Vancouver timestamp to an offset of UTC which I can play with I'm sure some of you with global companies must have dealt with this challenge. My understanding is you can get fined in Germany for communicating with employees out of hours. Let just say I manage to determine the correct textual timestamp like Americas\Chicago - how do I translated that to a UTC offset ? of course if anyone can spot what I'm trying to do and has a better way then I'm all ears
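Splunk's eval functions don't take an arbitrary IANA timezone name directly, so one workable sketch is a small lookup file (here a hypothetical tz_offsets.csv with columns timezone and utc_offset_hours) mapping each zone name to its offset, followed by a local-hour check before alerting:

```
| lookup tz_offsets.csv timezone OUTPUT utc_offset_hours
| eval local_hour = tonumber(strftime(now() + utc_offset_hours*3600, "%H"))
| where local_hour >= 8 AND local_hour <= 18
```

The obvious caveat is daylight saving time: a static offset column needs maintaining, so some teams keep separate standard/DST columns or regenerate the CSV on a schedule from an external source.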
The objective is to send out an alert only if the 'low' and 'high' strings are both detected more than 5 minutes apart. That means if the interval is 5 minutes or less, the alert should not be processed (ignore it); if it is more than 5 minutes, process it and send out an alert when low or high is received in the syslog. Below is what is currently configured in the Splunk rules for both low and high, but I don't really understand it. Can someone explain how it works?

Alert-Water High

index="watersb" item="Water Level"
| fields watersb_timestamp host machine_id location state status
| transaction host maxspan=5m
| eval status_count=mvcount(status)
| search status_count=1 status=high
| eval timestamp=strptime(watersb_timestamp,"%b %d %H:%M:%S")
| convert timeformat="%d %b %Y %H:%M:%S" ctime(timestamp)
| table timestamp host status machine_id location state

Alert-Water Low

index="watersb" item="Water Level"
| fields watersb_timestamp host machine_id location state status
| transaction host maxspan=5m
| eval status_count=mvcount(status)
| search status_count=1 status=low
| eval timestamp=strptime(watersb_timestamp,"%b %d %H:%M:%S")
| convert timeformat="%d %b %Y %H:%M:%S" ctime(timestamp)
| table timestamp host status machine_id location state
Hello, how do we edit the tooltip of a choropleth map to display an additional column and its value? And secondly, how do we rename the count column to something else? The moment we rename it, the map no longer renders.
I want a main dashboard to pull results from multiple application dashboards. I do not want to repeat the same queries in the main dashboard. Is this possible? Example:

<row>
  <panel>
    <table>
      <title>Overall_Status</title>
      <search>
        <query>index=clo_application_logs host IN (xxxx.com) "Unable to read the file" OR "DB ERROR" OR "JMS Exception Occurred" OR "outOfMemory" OR "ERROR - PricingManager" OR "ERROR - DataService"
| stats count
| eval Overall_Status=case(count&gt;0,"CRITICAL", 1=1, "NORMAL")
| append [search index=clo_application_logs host IN (xxxx.com xxxx.comm) "FAIL" | stats count | eval Overall_Status=case(count&gt;0,"CRITICAL", 1=1, "NORMAL")]
| stats count by Overall_Status
| eval colour=case(test=="NORMAL", "0", test=="CRITICAL", "1", 2=2, Unknown)
| sort - colour
| fields Overall_Status
| head 1
| appendpipe [stats count | where count="0" | fillnull value="No Results" Overall_Status]</query>
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
      <format type="color" field="Overall_Status">
        <colorPalette type="map">{"CRITICAL":#DC4E41,"NORMAL":#53A051}</colorPalette>
      </format>
    </table>
  </panel>
</row>
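One common way to avoid duplicating a query across dashboards (a sketch, with a hypothetical report name) is to move the SPL into a scheduled report and reference it from every panel that needs it:

```
<search>
  <query>| loadjob savedsearch="admin:search:Overall_Status_Report"</query>
</search>
```

loadjob reuses the results of the report's last scheduled run, so every dashboard referencing it shares one execution; the savedsearch command re-runs the SPL instead. Base searches (<search id="...">) also de-duplicate queries, but only within a single dashboard.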
Newbie in Splunk here. How do I extract the value zzz@zzz.com (at the end of the payload below) into a new field named "user"?

POST /xxxxx/xxxx/xxx/xxxxx HTTP/1.1\r\nHost: xxxx.xxxx.com\r\nConnection: Keep-Alive\r\nAccept-Encoding: gzip\r\nCF-IPCountry: US\r\nX-Forwarded-For: 1.1.1.1, 2.2.2.2\r\nCF-RAY: 715ae60ec98f02ce-MIA\r\nContent-Length: 37\r\nX-Forwarded-Proto: https\r\nCF-Visitor: {""scheme"":""https""}\r\nsec-ch-ua: "" Not A;Brand"";v=""99"", ""Chromium"";v=""101"", ""Google Chrome"";v=""101""\r\nsec-ch-ua-mobile: ?1\r\nauthorization: *************\r\ncontent-type: application/json\r\nbundleid: com.xxx.xxxxx\r\naccept: application/json, text/plain, */*\r\nsecurekey: Sssssss==\r\nuser-agent: Mozilla/5.0 (Linux; Android 12; SM-A326U) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.0.0 Mobile Safari/537.36\r\nsec-ch-ua-platform: ""Android""\r\norigin: https://xxx.com\r\nsec-fetch-site: cross-site\r\nsec-fetch-mode: cors\r\nsec-fetch-dest: empty\r\nreferer: https://myxxx.com/\r\naccept-language: en-US,en;q=0.9\r\nCF-Connecting-IP: 1.1.1.1\r\nCDN-Loop: cloudflare\r\n\r\n{""user"":""zzz@zzz.com""}
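A hedged sketch using rex, assuming the payload always ends with a JSON fragment shaped like the one above with doubled quote characters:

```
| rex "\"\"user\"\":\"\"(?<user>[^\"]+)\"\""
```

If the doubled quotes are an artifact of how the event was exported and the raw data actually contains single double-quotes, the pattern would instead be:

```
| rex "\"user\":\"(?<user>[^\"]+)\""
```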
I have some data that's coming in as follows:

"data": { "a": 100, "b": 200 }
"data": { "a": 50, "c": 75 }
...

I want to aggregate the values so I end up with a table of the sum of values by key:

<search> | chart sum(data.*) as *

This gives me the table:

a | b | c
150 | 200 | 75

Now I want to sort the columns by value so that they are in the order b, a, c. It looks like the "sort" keyword sorts rows, not columns. How would I do this? Note this is an extremely simplified example: the actual data will have tons of keys, which are arbitrary UUIDs, and there will be a lot of rows to sum. I need to aggregate and then sort by value so that the highest is on the left-hand side. I would also like to keep only the first n columns. It looks like "head" also works on rows, not columns. Any help would be greatly appreciated. Thanks.
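Since sort and head both operate on rows, the usual trick is to transpose, sort and trim the rows, then transpose back. A sketch continuing the search above ("row 1" and column are the default field names transpose produces, and transpose 0 means "all rows/columns" rather than the default limit of 5):

```
<search>
| chart sum(data.*) as *
| transpose 0
| sort - "row 1"
| head 10
| transpose 0 header_field=column
| fields - column
```

head 10 here keeps only the ten largest keys, which become the leftmost columns after the second transpose.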
Under the "Compliance" dashboard in InfoSec App for Splunk there is a number of monitored (AD) accounts, but that number differs from the number of accounts monitored under the Health tab. Is this normal? How do I ensure that both display the proper number of monitored AD accounts?
Does anyone happen to know whether there is a default time range for the event hub input in the Splunk Add-on for Microsoft Cloud Services, and where the checkpoint value is stored? I am unable to find this information at https://docs.splunk.com/Documentation/AddOns/released/MSCloudServices/Configureeventhubs. Thanks.
Hi, I am struggling with an SPL search. I am trying to create a report which lists the Online status of a specific Site/location depending on whether a message was received from it. I need the Online (or Offline) status grouped in a daily format, which I have achieved so far with the SPL below. However, here is the challenge: when a Site/location goes "Offline", I would like to know the exact hour:minute that the last communication was logged. Currently the Last_Communication column shows me the date, but the time is 00:00:00, which I know is not true. I need the exact hour/minute the last event was logged for that specific day if the site was "Online".

Current SPL:

| from datamodel:"mydatamodel"
| bin _time span=1d
| search field1="comm_message"
| eval Online_Status=if(like(Location_field,"xyz"),1,0)
| stats sum(Online_Status) AS Message_Counts by _time
| eval Online_Status=if(Message_Counts=0,"OFFLINE", "ONLINE")
| eval Last_Communication=if(Online_Status="ONLINE",(_time), "OFFLINE")
| convert ctime(Last_Communication)

Any help would be greatly appreciated. Thanks
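The bin command truncates _time to the start of each day, which is why Last_Communication always reads 00:00:00. One sketch that keeps the real event time is to bin into a separate day field and capture max(_time) alongside the daily count (field names follow the query above):

```
| from datamodel:"mydatamodel"
| search field1="comm_message"
| eval Online_Status=if(like(Location_field,"xyz"),1,0)
| bin _time span=1d as day
| stats sum(Online_Status) as Message_Counts, max(_time) as Last_Communication by day
| eval Online_Status=if(Message_Counts=0,"OFFLINE","ONLINE")
| eval Last_Communication=if(Online_Status="ONLINE", strftime(Last_Communication,"%d %b %Y %H:%M:%S"), "OFFLINE")
```

Because bin writes the truncated value into day instead of overwriting _time, max(_time) still reflects the last event's actual hour and minute within each day.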
Can you create a query that searches for all the logs that entered an index in the last 24 hours and groups them by index? Something like a table with the number of logs added per index over the period of time you select. It would be much appreciated, thank you so much for your help. :)
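A hedged sketch using tstats, which counts indexed events quickly without retrieving them (run over whatever time range you pick, e.g. Last 24 hours):

```
| tstats count where index=* by index
| sort - count
```

If you also want the counts bucketed over time, _time can be added to the by clause with a span:

```
| tstats count where index=* by index, _time span=1h
```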