All Topics

This is my source code:

</search>
<option name="charting.chart">column</option>
<option name="charting.drilldown">none</option>
<option name="refresh.display">progressbar</option>
</chart>
Hello Splunkers!! I am getting the below error while executing the search. Please let me know why this error occurs and help me fix the issue.
We need to fetch user activity such as: 1. when a user has accessed the Splunk Cloud platform, 2. activities performed by users in the Splunk Cloud platform. Please provide the API request to fetch the audit logs or events needed to determine user activity from the Splunk Cloud platform.
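For reference, a hedged sketch of the kind of search that can pull this activity: user logins and search activity are recorded in the internal _audit index, and a search such as the one below can be run in Search or submitted through the REST search endpoint /services/search/jobs. The time range and the list of action values are assumptions to adjust for your environment.

index=_audit sourcetype=audittrail earliest=-24h action IN ("login attempt", "search", "edit_user")
| table _time user action info search
| sort - _time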
Hello Everyone, I wanted to know how to handle the cookie page in Synthetic scripting while creating a user journey. What command should I use to accept/reject cookies? Any suggestion would be a great help. Thank you, Mahendra Shetty
Hi Team, I am looking for help getting an alert to trigger if the latest result of a timechart command is 0. Suppose I am running a search over the last 8 hours with span=2h; if the result looks like the one below, where 12-18-23 00:00 is "0", it should raise an alert. It should also display a result when there are "0" events in the last 8 hours, because at the moment I get nothing if there are no events during that time. Thank you,
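A minimal sketch of one way to do this, assuming the base search is index=main (replace with your own): cont=true together with fillnull makes empty buckets show up as 0, and keeping only the most recent bucket lets the alert trigger on "number of results > 0" whenever that bucket is 0.

index=main earliest=-8h
| timechart span=2h count cont=true
| fillnull value=0 count
| tail 1
| where count=0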
Hi Everyone, I have a column chart for the below query. As shown in the screenshot below, the x-axis labels are sorted in alphabetical order, but my requirement is to display them in a static order (critical, high, medium, low, informational). In addition, can we have a unique color for the bar of each x-axis label (e.g. critical: red, high: green)? Can someone guide me on how to implement these changes? Appreciate your help in advance!!

Query: `notable` | stats count by urgency
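One common approach, sketched against the same `notable` search: build an explicit sort key with case() so the x-axis follows severity order instead of alphabetical order. Per-bar colors would then be set in the panel XML (for example via charting.fieldColors when the chart is split by urgency); treat that option name as something to verify for your chart type.

`notable`
| stats count by urgency
| eval order=case(urgency="critical",1, urgency="high",2, urgency="medium",3, urgency="low",4, urgency="informational",5)
| sort order
| fields - order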
I have a search that returns a list of users and the countries logins have occurred from, grouped by user.

index=o365 UserloginFailed* | iplocation ClientIP | search Country!=Australia | stats values(Country) by user

So if a user logs in from one country, I get a single record for the user (user, Country). If a user logs in from multiple locations, I get the user name in one column and a list of the source locations in the values(Country) column. I would like to construct the search so that I only see those users who have logins from multiple countries. Thanks
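A minimal extension of that search: add a distinct count of countries per user and keep only users with more than one.

index=o365 UserloginFailed*
| iplocation ClientIP
| search Country!=Australia
| stats values(Country) as Countries dc(Country) as country_count by user
| where country_count > 1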
Hello Everyone, I have created an alert that looks for security events for a few applications, and if the condition matches it must notify the users related to that specific application. Let's say we have applications A and B. Application A has a field users with values test, test2, test3, and Application B has a field users with values test4, test5, test6. If Application A has any security breach events, it must send an email to its users. Regards, Sai
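One common pattern, sketched below under the assumption that a lookup file (here called app_owners.csv, a hypothetical name) maps each application to a comma-separated list of recipient addresses: the alert search looks up the recipients, and the email alert action can then reference $result.recipients$ in its To field.

index=security_events severity=high ```hypothetical base search; replace with your own```
| stats count by application
| lookup app_owners.csv application OUTPUT recipients
| where count > 0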
Hi, I want to create a sensitivity table showing how many errors happen on average in each time interval. I wrote the following code and it works OK:

| eval time = strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%Q")
| bin span=1d time
| stats sum(SumTotalErrors) as sumErrors by time
| eval readable_time = strftime(time, "%Y-%m-%d %H:%M:%S")
| stats avg(sumErrors)

Now, I want to: 1. add a generic loop to calculate the average for spans of 1m, 2m, 3m, 5m, 1h, ... and present them all in one table (I tried to replace 1d with a parameter but I haven't succeeded yet), and 2. give the user an option to enter their desired span in a dashboard and calculate the average errors for them. How can I do that? Thanks, Maayan
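For the dashboard part, a minimal sketch: expose the span as a dashboard token (here called span_tok, an assumed name fed by a text box or dropdown input with values such as 1m, 5m, 1h, 1d) and use it directly in bin. For a fixed list of spans in a single table, the same calculation can be repeated once per span and the results appended, but the token covers the interactive requirement.

| eval time = strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%Q")
| bin span=$span_tok$ time
| stats sum(SumTotalErrors) as sumErrors by time
| stats avg(sumErrors) as avgErrors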
Hi Splunkers, we have ingested Threat Intelligence feeds from Group-IB into Splunk, and we want to benefit from this data as much as possible. I want to understand how Splunk ES consumes this data. Do we need to make Splunk ES use this data and alert us in case a match happens, or does Splunk ES use this data without our interaction? Are we required to create custom correlation searches and configure the adaptive response action, or what?
Hi All, I got logs like the one below and I need to create a table out of it.

<p align='center'><font size='4' color=blue>Disk Utilization for gcgnamslap in Asia Testing -10.100.158.51 </font></p>
<table align=center border=2>
<tr style=background-color:#2711F0 ><th>Filesystem</th><th>Type</th><th>Blocks</th><th>Used %</th><th>Available %</th><th>Usage %</th><th>Mounted on</th></tr>
<tr><td> /dev/root </td><td> ext3 </td><td> 5782664 </td><td> 1807636 </td><td> 3674620 </td><td bgcolor=red> 33% </td><td> / </td></tr>
<tr><td> devtmpfs </td><td> devtmpfs </td><td> 15872628 </td><td> 0 </td><td> 15872628 </td><td bgcolor=white> 0% </td><td> /dev </td></tr>
<tr><td> tmpfs </td><td> tmpfs </td><td> 15878640 </td><td> 10580284 </td><td> 5298356 </td><td bgcolor=red> 67% </td><td> /dev/shm </td></tr>
<tr><td> tmpfs </td><td> tmpfs </td><td> 15878640 </td><td> 26984 </td><td> 15851656 </td><td bgcolor=white> 1% </td><td> /run </td></tr>
<tr><td> tmpfs </td><td> tmpfs </td><td> 15878640 </td><td> 0 </td><td> 15878640 </td><td bgcolor=white> 0% </td><td> /sys/fs/cgroup </td></tr>
<tr><td> /dev/md1 </td><td> ext3 </td><td> 96922 </td><td> 36667 </td><td> 55039 </td><td bgcolor=red> 40% </td><td> /boot </td></tr>
<tr><td> /dev/md6 </td><td> ext3 </td><td> 62980468 </td><td> 28501072 </td><td> 31278452 </td><td bgcolor=red> 48% </td><td> /usr/sw </td></tr>
<tr><td> /dev/mapper/cp1 </td><td> ext4 </td><td> 1126568640 </td><td> 269553048 </td><td> 800534468 </td><td bgcolor=white> 26% </td><td> /usr/p1 </td></tr>
<tr><td> /dev/mapper/cp2 </td><td> ext4 </td><td> 1126568640 </td><td> 85476940 </td><td> 984610576 </td><td bgcolor=white> 8% </td><td> /usr/p2 </td></tr>
</table></body></html>

I used the below query to get the table:

... | rex field=_raw "Disk\sUtilization\sfor\s(?P<Server>[^\s]+)\sin\s(?P<Region>[^\s]+)\s(?P<Environment>[^\s]+)\s\-(?P<Server_IP>[^\s]+)\s\<"
| rex field=_raw max_match=0 "\<tr\>\<td\>\s(?P<Filesystem>[^\s]+)\s\<\/td\>\<td\>\s(?P<Type>[^\s]+)\s\<\/td\>\<td\>\s(?P<Blocks>[^\s]+)\s\<\/td\>\<td\>\s(?P<Used>[^\s]+)\s\<\/td\>\<td\>\s(?P<Available>[^\s]+)\s\<\/td\>\<td\sbgcolor\=\w+\>\s(?P<Usage>[^\%]+)\%\s\<\/td\>\<td\>\s(?P<Mounted_On>[^\s]+)\s\<\/td\>\<\/tr\>"
| table Server,Region,Environment,Server_IP,Filesystem,Type,Blocks,Used,Available,Usage,Mounted_On
| dedup Server,Region,Environment,Server_IP

And below is the table I am getting: a single row where each extracted field holds all nine values as a multivalue.

Server: gcgnamslap
Region: Asia
Environment: Testing
Server_IP: 10.100.158.51
Filesystem: /dev/root, devtmpfs, tmpfs, tmpfs, tmpfs, /dev/md1, /dev/md6, /dev/mapper/p1, /dev/mapper/p2
Type: ext3, devtmpfs, tmpfs, tmpfs, tmpfs, ext3, ext3, ext4, ext4
Blocks: 5782664, 15872628, 15878640, 15878640, 15878640, 96922, 62980468, 1126568640, 1126568640
Used: 1807636, 0, 10580284, 26984, 0, 36667, 28501072, 269553048, 85476940
Available: 3674620, 15872628, 5298356, 15851656, 15878640, 55039, 31278452, 800534468, 984610576
Usage: 33, 0, 67, 1, 0, 40, 48, 26, 8
Mounted_On: /, /dev, /dev/shm, /run, /sys/fs/cgroup, /boot, /usr/sw, /usr/p1, /usr/p2

Here, the fields Filesystem, Type, Blocks, Used, Available, Usage and Mounted_On all come up in one row.
I want the table to be separated into rows like below:

Server | Region | Environment | Server_IP | Filesystem | Type | Blocks | Used | Available | Usage | Mounted_On
gcgnamslap | Asia | Testing | 10.100.158.51 | /dev/root | ext3 | 5782664 | 1807636 | 3674620 | 33 | /
gcgnamslap | Asia | Testing | 10.100.158.51 | devtmpfs | devtmpfs | 15872628 | 0 | 15872628 | 0 | /dev
gcgnamslap | Asia | Testing | 10.100.158.51 | tmpfs | tmpfs | 15878640 | 10580284 | 5298356 | 67 | /dev/shm
gcgnamslap | Asia | Testing | 10.100.158.51 | tmpfs | tmpfs | 15878640 | 26984 | 15851656 | 1 | /run
gcgnamslap | Asia | Testing | 10.100.158.51 | tmpfs | tmpfs | 15878640 | 1807636 | 15878640 | 0 | /sys/fs/cgroup
gcgnamslap | Asia | Testing | 10.100.158.51 | /dev/md1 | ext3 | 96922 | 36667 | 55039 | 40 | /boot
gcgnamslap | Asia | Testing | 10.100.158.51 | /dev/md6 | ext3 | 62980468 | 28501072 | 31278452 | 48 | /usr/sw
gcgnamslap | Asia | Testing | 10.100.158.51 | /dev/mapper/p1 | ext4 | 1126568640 | 269553048 | 800534468 | 26 | /usr/p1
gcgnamslap | Asia | Testing | 10.100.158.51 | /dev/mapper/p2 | ext4 | 1126568640 | 85476940 | 984610576 | 8 | /usr/p2

Please help to create a query to get the table in the above expected manner. Your kind inputs are highly appreciated! Thank you!
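A sketch of one way to get one row per filesystem from the same extractions: zip the parallel multivalue fields together, mvexpand the combined field, then split it back into individual fields. The "|" delimiter is an arbitrary choice; append this after the two rex commands in place of the existing table/dedup.

| eval zipped=mvzip(mvzip(mvzip(mvzip(mvzip(mvzip(Filesystem, Type, "|"), Blocks, "|"), Used, "|"), Available, "|"), Usage, "|"), Mounted_On, "|")
| mvexpand zipped
| eval parts=split(zipped, "|")
| eval Filesystem=mvindex(parts,0), Type=mvindex(parts,1), Blocks=mvindex(parts,2), Used=mvindex(parts,3), Available=mvindex(parts,4), Usage=mvindex(parts,5), Mounted_On=mvindex(parts,6)
| table Server, Region, Environment, Server_IP, Filesystem, Type, Blocks, Used, Available, Usage, Mounted_On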
Hi Team, I have the below three log events, which carry a statusCode of 200, 400, or 500 in different logs. I need help to find the status code error rate for all the different status codes over the respective time.

Event 1: 400 error
{
   body: {
      message: [
         { errorMessage: must have required property 'objectIds', field: objectIds }
         { errorMessage: must be equal to one of the allowed values : [object1,object2], field: objectType }
      ]
      statusCode: 400
      type: BAD_REQUEST_ERROR
   }
   headers: {
      Access-Control-Allow-Origin: *
      Content-Type: application/json
   }
   hostname:
   level: 50
   msg: republish error response
   statusCode: 400
   time: ****
}

Event 2: 500 error
{
   awsRequestId:
   body: {
      message: Unexpected token “ in JSON at position 98
   }
   headers: {
      Access-Control-Allow-Origin: *
      Content-Type: application/json
   }
   msg: reprocess error response
   statusCode: 500
   time: ***
}

Event 3: Success
{
   awsRequestId:
   body: {
      message: republish request has been submitted for [1] ids
   }
   headers: {
      Access-Control-Allow-Origin: *
      Content-Type: application/json
   }
   msg: republish success response
   statusCode: 200
   time: ***
}
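Assuming statusCode is already extracted as a field (it appears at the top level of each event), a minimal sketch of an hourly rate per status code; the index name and the span are placeholders to adjust.

index=your_index statusCode=*
| bin _time span=1h
| stats count as requests by _time, statusCode
| eventstats sum(requests) as total by _time
| eval rate_pct=round(requests * 100 / total, 2)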
Hey Guys, I have a Node.js application and I use Winston to print out the logs for our application, e.g. logger.info({responseStatus: 200}). I am not using a log file, just printing the log output. I am not quite sure what's causing the issue here. The log events work fine in other environments and display as separate events, so I can keep track of the event field names. But in the production environment, my logs are mixed with console.log output and treated as one event instead. It looks something like the example here (just an example, but it looks similar). I am new to Splunk Enterprise, and I am not quite sure where my configuration file is located. It's OK if there's no solution, but I would like to hear some advice from the Splunk experts on what may be causing this.
Good afternoon, I hope you are well. I am migrating my alert environment from TheHive to start using ES. I would like to know whether, in ES, when creating correlation searches, I can configure a field in the notable event that analysts can edit. For example, when creating cases in TheHive, I include the desired field, and analysts set its value when they take the case for processing. Despite studying, I couldn't figure out how to implement this in a notable event so that analysts can provide inputs such as identifying the technology involved or deciding whether it should be forwarded. This would help me use it for auditing purposes later on. Is it possible to achieve this in ES?
Hello Team, I am trying to set up a proxy on a Splunk Heavy Forwarder. I did it by setting the http_proxy environment variable, but Splunk's Python is not honouring the environment variable set on the Linux machine where the HF is installed. If I run the Python script directly, it gets the data through the proxy; if I run the same script with /opt/splunk/bin/splunk cmd python, it does not go through the proxy. Is there any way we can make Splunk honour environment variables?
Dear All,

Scenario: One AV server has multiple endpoints reporting to it. This AV server is integrated with Splunk, and through the AV server we are receiving DAT version info for all the reporting endpoints.

Requirement: Need to generate a monthly AV DAT compliance report. The criterion for DAT compliance is 7 days: within 7 days a system should be updated to the latest DAT.

Work done till now: There is no intelligence in the data to identify the latest DAT from the AV-Splunk logs; we only see which DAT each reporting endpoint is currently on. I used the eval command and tied the latest/today DAT to today's date (converted today_date to today_DAT). Based on that, I am able to calculate the DAT compliance for 7 days, keeping the today_DAT of the 8th day as the reference. This Splunk query gives correct data for any time frame, but only for the past 7 days of compliance.

Issue: For the past 30 days, i.e. the 25th to the 25th of every month, I want to divide the logs into 7-day time frames, starting from e.g. 25th Dec, then 1st Jan, 8th Jan, 15th Jan, 22nd Jan, up to 25th Jan (the last slot being less than 7 days), and then calculate compliance for each 7-day time frame to know the overall compliance on 25th Jan. Accordingly, combine the 25th Dec through 25th Jan data for the month to give the final report.

Where I am stuck: In the current query I tried to add the bin command for 7 days, but I am unable to tie the latest DAT date (the today_DAT date, e.g. for 1st Jan) to the 7th day of the first bin, then 8th Jan for the second bin, and so on. In case there is any other method/query to do the same, kindly let me know.

PFA screenshot for your reference @PickleRick @ITWhisperer @yuanliu
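A rough sketch of the 7-day bucketing idea, with heavily assumed field names (endpoint and dat_date are hypothetical; substitute the real fields from the AV logs): bin the events into 7-day windows, take each endpoint's latest DAT per window, use the newest DAT seen in that window as the reference, and mark an endpoint compliant when it is within 7 days of that reference.

index=av_logs earliest=-30d@d ```hypothetical index; replace with your own```
| eval dat_epoch=strptime(dat_date, "%Y-%m-%d") ```dat_date is an assumed field name```
| bin _time span=7d
| stats latest(dat_epoch) as endpoint_dat by _time, endpoint
| eventstats max(endpoint_dat) as reference_dat by _time
| eval compliant=if((reference_dat - endpoint_dat) <= 7*86400, 1, 0)
| stats sum(compliant) as compliant_endpoints, count as total_endpoints by _time
| eval compliance_pct=round(compliant_endpoints * 100 / total_endpoints, 2)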
Hi, kinda new to Splunk. I'm sending data to Splunk via HEC. It's a DTO which contains various fields, one of them being requestBody, which is a string containing the JSON payload my endpoint is receiving. When viewing the log event within Splunk, requestBody stays a string. I was hoping it could be expanded so that the JSON fields would be searchable. As you can see, when I click on "body", the whole line is selected. I am hoping for, for example, "RYVBNQ" to be individually selectable so that I can do searches against it.
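Assuming the field really is named requestBody, spath can parse a JSON string held in a field at search time, making its keys searchable fields; a minimal sketch with placeholder index/sourcetype. Making the extraction happen automatically on every search would be a separate props/calculated-field configuration step.

index=your_index sourcetype=your_sourcetype
| spath input=requestBody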
Does the Palo Alto Networks App no longer have a page where you can view and filter out network traffic activity?
Hello everybody, I'm new here and recently I created this setup:

Ubuntu: Splunk server
Ubuntu: Splunk forwarder
Windows 10: Splunk forwarder

I followed the Splunk how-to video for the Ubuntu splunkfwd: https://www.youtube.com/watch?v=rs6q28xUd-o&t=191s

I can see my host in the data summary but not in Forwarder Management: how would you explain that? I'm thinking it might be about permissions, so here is what I did. I also added a deploymentclient.conf in /opt/splunkforwarder/etc/system/local/ (nano deploymentclient.conf):

[deployment-client]

[target-broker:deploymentServer]
targetUri = 192.ipfromserver:8089

Have a great evening
I have a search as follows:

index=* | search sourcetype=*
| spath logs{} output=logs
| spath serial_number output=serial_number
| spath result output=result
| table serial_number result
| ```stats dc(serial_number) as throughput```
| stats count(eval(if(result="Fail",1,null()))) as failures count(eval(if(result="Pass",1,null()))) as passes

This returns a table, shown in the capture, with failures=215 and passes=350. How can I get these results as two separate bars in one bar chart? Basically I want to show the pass/fail rate.

Sample of the JSON data I am working with:

{"serial_number": "30913JC0024EW1482300425", "type": "Test", "result": "Pass", "logs": [
{"test_name": "UGC Connect", "result": "Pass"},
{"test_name": "Disable UGC USB Comm Watchdog", "result": "Pass"},
{"test_name": "Hardware Rev", "result": "Pass", "received": "4"},
{"test_name": "Firmware Rev", "result": "Pass", "received": "1.8.3.99", "expected": "1.8.3.99"},
{"test_name": "Set Serial Number", "result": "Pass", "received": "1 A S \n", "expected": "1 A S"},
{"test_name": "Verify serial number", "result": "Pass", "received": "JC0024EW1482300425", "expected": "JC0024EW1482300425", "reason": "Truncated full serial number: 30913JC0024EW1482300425 to JC0024EW1482300425"},
{"test_name": "Thermocouple", "pt1_ugc": "24969.0", "pt1": "25000", "pt2_ugc": "19954.333333333332", "pt2": "20000", "pt3_ugc": "14993.666666666666", "pt3": "15000", "result": "Pass", "tolerance": "1000 deci-mV"},
{"test_name": "Cold Junction", "result": "Pass", "ugc_cj": "278", "user_temp": "270", "tolerance": "+ or - 5 C"},
{"test_name": "Glow Plug Open and Short", "result": "Pass", "received": "GP Open, Short, and Load verified OK.", "expected": "GP Open, Short, and Load verified OK."},
{"test_name": "Glow Plug Power On", "result": "Pass", "received": "User validated Glow Plug Power"},
{"test_name": "Glow Plug Measure", "pt1_ugc": "848", "pt1": "2070", "pt1_tolerance": "2070", "pt2_ugc": "5201", "pt2": "5450", "pt2_tolerance": "2800", "result": "Pass"},
{"test_name": "Motor Soft Start", "result": "Pass", "received": "Motor Soft Start verified", "expected": "Motor Soft Start verified by operator"},
{"test_name": "Motor", "R_rpm_ugc": 1525.0, "R_rpm": 1475, "R_v_ugc": 160.0, "R_v": 155, "R_rpm_t": 150, "R_v_t": 160, "R_name": "AUGER 320 R", "F_rpm_ugc": 1533.3333333333333, "F_rpm": 1475, "F_v_ugc": 164.0, "F_v": 182, "F_rpm_t": 150, "F_v_t": 160, "F_name": "AUGER 320 F", "result": "Pass"},
{"test_name": "Fan", "ugc_rpm": 2436.0, "rpm": 2130, "rpm_t": 400, "ugc_v": 653.3333333333334, "v": 630, "v_t": 160, "result": "Pass"},
{"test_name": "RS 485", "result": "Pass", "received": "All devices detected", "expected": "Devices detected: ['P']"},
{"test_name": "Close UGC Port", "result": "Pass"},
{"test_name": "DFU Test", "result": "Pass", "received": "Found DFU device"},
{"test_name": "Power Cycle", "result": "Pass", "received": "User confirmed power cycle"},
{"test_name": "UGC Connect", "result": "Pass"},
{"test_name": "Close UGC Port", "result": "Pass"},
{"test_name": "USB Power", "result": "Pass", "received": "USB Power manually verified"}]}
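One way to get one bar per outcome, sketched against the same data: count by result instead of building separate failures/passes columns, then render the panel as a bar or column chart so each result value becomes its own bar. Alternatively, the existing one-row failures/passes table can be flipped into two rows with | transpose, which charts the same way.

index=* sourcetype=*
| spath result output=result
| stats count by result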