All Posts



Hi @nithys, please try this regex: | rex "(?ms)statusCode: (?<status_code>\d+)" You can test it at https://regex101.com/r/Nfgp6r/1. Ciao. Giuseppe
Hi @gcusello  I tried the query below, but it's not fetching the correct counts for each status code. If I want to capture other status codes greater than 400 or 500, how should I include them?  index="**" source="****" | rex "\"statusCode\":(?<statusCode>[\d]*)" | stats count by statusCode | eval statusCode =case(statusCode="200","success",statusCode="500","Internal Server Error",statusCode="400","Bad Request") | table statusCode,count
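One likely reason the counts look wrong: the case() in the search above has no catch-all, so any status code other than 200/400/500 evaluates to null and silently drops out of the table, and ranges such as ">=400" or ">=500" are easier to express as comparisons than as literal matches. The mapping logic can be sanity-checked outside Splunk; this is a minimal sketch using hypothetical sample events modeled on the logs in this thread, mirroring an SPL case() with range tests and a true() default.

```python
import re

# Hypothetical sample raw events (modeled on the JSON-style logs in this thread)
events = [
    '{"msg": "republish error response", "statusCode":400}',
    '{"msg": "reprocess error response", "statusCode":503}',
    '{"msg": "republish success response", "statusCode":200}',
]

def classify(code):
    # Mirrors an SPL case() with range tests and a catch-all default:
    # case(statusCode>=500, "Server Error", statusCode>=400, "Client Error",
    #      statusCode==200, "success", true(), "Other")
    if code >= 500:
        return "Server Error"
    if code >= 400:
        return "Client Error"
    if code == 200:
        return "success"
    return "Other"

counts = {}
for raw in events:
    m = re.search(r'"statusCode":(?P<statusCode>\d+)', raw)
    if m:
        label = classify(int(m.group("statusCode")))
        counts[label] = counts.get(label, 0) + 1

print(counts)  # {'Client Error': 1, 'Server Error': 1, 'success': 1}
```

Note also that the expanded events later in this thread display the field as statusCode: 400 (no quotes), so the rex pattern has to match the raw event format, not the pretty-printed view.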
You can use the mvexpand command to separate the multi-value fields into separate rows.  First, however, you must preserve the relation between the field values by converting them into single-value tuples.  Do that using mvzip then break the tuples apart using split. ... | rex field=_raw "Disk\sUtilization\sfor\s(?P<Server>[^\s]+)\sin\s(?P<Region>[^\s]+)\s(?P<Environment>[^\s]+)\s\-(?P<Server_IP>[^\s]+)\s\<" | rex field=_raw max_match=0 "\<tr\>\<td\>\s(?P<Filesystem>[^\s]+)\s\<\/td\>\<td\>\s(?P<Type>[^\s]+)\s\<\/td\>\<td\>\s(?P<Blocks>[^\s]+)\s\<\/td\>\<td\>\s(?P<Used>[^\s]+)\s\<\/td\>\<td\>\s(?P<Available>[^\s]+)\s\<\/td\>\<td\sbgcolor\=\w+\>\s(?P<Usage>[^\%]+)\%\s\<\/td\>\<td\>\s(?P<Mounted_On>[^\s]+)\s\<\/td\>\<\/tr\>" ``` Combine related values ``` | eval tuple = mvzip(Filesystem, mvzip(Type, mvzip(Blocks, mvzip(Used, mvzip(Available, mvzip(Usage, Mounted_On)))))) ``` Create a new row for each tuple ``` | mvexpand tuple ``` Break the tuple apart ``` | eval tuple = split(tuple, ",") | eval Filesystem = mvindex(tuple,0), Type = mvindex(tuple,1), Blocks = mvindex(tuple, 2), Used = mvindex(tuple,3), Available = mvindex(tuple, 4), Usage = mvindex(tuple, 5), Mounted_On = mvindex(tuple, 6) | table Server,Region,Environment,Server_IP,Filesystem,Type,Blocks,Used,Available,Usage,Mounted_On | dedup Server,Region,Environment,Server_IP  
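For readers unfamiliar with the mvzip / mvexpand / split idiom used above: it is essentially zipping parallel columns into one tuple per row and then unpacking the tuple back into columns. A minimal Python sketch of the same idea, using hypothetical values taken from the disk-utilization example:

```python
# Sketch of what mvzip + mvexpand + split do.
# mvzip pairs up parallel multivalue fields into "a,b,c" strings,
# mvexpand makes one row per tuple, and split breaks each tuple
# back into individual columns.
filesystem = ["/dev/root", "tmpfs"]
fstype     = ["ext3", "tmpfs"]
usage      = ["33", "67"]

# mvzip(Filesystem, mvzip(Type, Usage)) -> comma-joined tuples
tuples = [",".join(t) for t in zip(filesystem, fstype, usage)]

# mvexpand tuple -> one row per element; split(tuple, ",") -> columns again
rows = [t.split(",") for t in tuples]

print(rows)  # [['/dev/root', 'ext3', '33'], ['tmpfs', 'tmpfs', '67']]
```

The single-value fields (Server, Region, etc.) don't need to go into the tuple because mvexpand copies them unchanged onto every expanded row.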
Hi, I want to create a sensitivity table. I want to show how many errors happen on average in each time interval. I wrote the following code and it works OK: | eval time = strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%Q") | bin span=1d time | stats sum(SumTotalErrors) as sumErrors by time | eval readable_time = strftime(time, "%Y-%m-%d %H:%M:%S") | stats avg(sumErrors) Now, I want to: 1. Add a generic loop to calculate the average for spans of 1m, 2m, 3m, 5m, 1h, ... and present them all in a table. I tried to replace 1d with a parameter, but I haven't succeeded yet. 2. Give the user the option to enter their desired span in a dashboard and calculate the average errors for them. How can I do that? Thanks, Maayan
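The per-span average being asked for is: bucket each timestamp to a span boundary, sum the errors per bucket, then average across buckets. Here is a minimal Python sketch of that aggregation, using hypothetical sample data; in a dashboard, the span would typically come from an input token (e.g. a dropdown whose value is substituted into the search as bin span=$span_tok$, where $span_tok$ is a hypothetical token name).

```python
from datetime import datetime

# Hypothetical (timestamp, error count) samples, modeled on the post's data
samples = [
    ("2024-01-01 00:00:10.000", 4),
    ("2024-01-01 00:00:50.000", 2),
    ("2024-01-01 00:02:30.000", 6),
]

def avg_errors_per_bin(samples, span_seconds):
    # Equivalent of: | bin span=<span> time
    #                | stats sum(SumTotalErrors) as sumErrors by time
    #                | stats avg(sumErrors)
    bins = {}
    for ts, errors in samples:
        epoch = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f").timestamp()
        bucket = int(epoch // span_seconds) * span_seconds  # snap to span boundary
        bins[bucket] = bins.get(bucket, 0) + errors
    return sum(bins.values()) / len(bins)

# Run the same aggregation for several spans, as point 1 of the post asks:
for span in (60, 120, 3600):
    print(span, avg_errors_per_bin(samples, span))
```

With a 60s span the first two samples share a bucket (sum 6) and the third gets its own (6), so the average is 6.0; with a 1h span everything collapses into one bucket averaging 12.0.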
Hi Splunkers, we have ingested Threat Intelligence feeds from Group-IB into Splunk, and we want to benefit from this data as much as possible. I want to understand how Splunk ES consumes this data. Do we need to configure Splunk ES to use this data and alert us when a match happens, or does Splunk ES use this data without our interaction? Are we required to create custom correlation rules and configure the adaptive response action, or what?
Hi All, I have logs like the one below and I need to create a table out of them.
<p align='center'><font size='4' color=blue>Disk Utilization for gcgnamslap in Asia Testing -10.100.158.51 </font></p> <table align=center border=2> <tr style=background-color:#2711F0 ><th>Filesystem</th><th>Type</th><th>Blocks</th><th>Used %</th><th>Available %</th><th>Usage %</th><th>Mounted on</th></tr>
<tr><td> /dev/root </td><td> ext3 </td><td> 5782664 </td><td> 1807636 </td><td> 3674620 </td><td bgcolor=red> 33% </td><td> / </td></tr>
<tr><td> devtmpfs </td><td> devtmpfs </td><td> 15872628 </td><td> 0 </td><td> 15872628 </td><td bgcolor=white> 0% </td><td> /dev </td></tr>
<tr><td> tmpfs </td><td> tmpfs </td><td> 15878640 </td><td> 10580284 </td><td> 5298356 </td><td bgcolor=red> 67% </td><td> /dev/shm </td></tr>
<tr><td> tmpfs </td><td> tmpfs </td><td> 15878640 </td><td> 26984 </td><td> 15851656 </td><td bgcolor=white> 1% </td><td> /run </td></tr>
<tr><td> tmpfs </td><td> tmpfs </td><td> 15878640 </td><td> 0 </td><td> 15878640 </td><td bgcolor=white> 0% </td><td> /sys/fs/cgroup </td></tr>
<tr><td> /dev/md1 </td><td> ext3 </td><td> 96922 </td><td> 36667 </td><td> 55039 </td><td bgcolor=red> 40% </td><td> /boot </td></tr>
<tr><td> /dev/md6 </td><td> ext3 </td><td> 62980468 </td><td> 28501072 </td><td> 31278452 </td><td bgcolor=red> 48% </td><td> /usr/sw </td></tr>
<tr><td> /dev/mapper/cp1 </td><td> ext4 </td><td> 1126568640 </td><td> 269553048 </td><td> 800534468 </td><td bgcolor=white> 26% </td><td> /usr/p1 </td></tr>
<tr><td> /dev/mapper/cp2 </td><td> ext4 </td><td> 1126568640 </td><td> 85476940 </td><td> 984610576 </td><td bgcolor=white> 8% </td><td> /usr/p2 </td></tr>
</table></body></html>
I used the query below to get the table: ... 
| rex field=_raw "Disk\sUtilization\sfor\s(?P<Server>[^\s]+)\sin\s(?P<Region>[^\s]+)\s(?P<Environment>[^\s]+)\s\-(?P<Server_IP>[^\s]+)\s\<" | rex field=_raw max_match=0 "\<tr\>\<td\>\s(?P<Filesystem>[^\s]+)\s\<\/td\>\<td\>\s(?P<Type>[^\s]+)\s\<\/td\>\<td\>\s(?P<Blocks>[^\s]+)\s\<\/td\>\<td\>\s(?P<Used>[^\s]+)\s\<\/td\>\<td\>\s(?P<Available>[^\s]+)\s\<\/td\>\<td\sbgcolor\=\w+\>\s(?P<Usage>[^\%]+)\%\s\<\/td\>\<td\>\s(?P<Mounted_On>[^\s]+)\s\<\/td\>\<\/tr\>" | table Server,Region,Environment,Server_IP,Filesystem,Type,Blocks,Used,Available,Usage,Mounted_On | dedup Server,Region,Environment,Server_IP   And below is the table  I am getting: Server Region Environment Server_IP Filesystem Type Blocks Used Available Usage Mounted_On gcgnamslap Asia Testing 10.100.158.51 /dev/root devtmpfs tmpfs tmpfs tmpfs /dev/md1 /dev/md6 /dev/mapper/p1 /dev/mapper/p2 ext3 devtmpfs tmpfs tmpfs tmpfs ext3 ext3 ext4 ext4 5782664 15872628 15878640 15878640 15878640 96922 62980468 1126568640 1126568640 1807636 0 10580284 26984 0 36667 28501072 269553048 85476940 3674620 15872628 5298356 15851656 15878640 55039 31278452 800534468 984610576 33 0 67 1 0 40 48 26 8 / /dev /dev/shm /run /sys/fs/cgroup /boot /usr/sw /usr/p1 /usr/p2 Here, the fields Filesystem,Type,Blocks,Used,Available,Usage_Percent and Mounted_On are coming up in one row. 
I want the table to be separated according to the rows like below:
Server Region Environment Server_IP Filesystem Type Blocks Used Available Usage Mounted_On
gcgnamslap Asia Testing 10.100.158.51 /dev/root ext3 5782664 1807636 3674620 33 /
gcgnamslap Asia Testing 10.100.158.51 devtmpfs devtmpfs 15872628 0 15872628 0 /dev
gcgnamslap Asia Testing 10.100.158.51 tmpfs tmpfs 15878640 10580284 5298356 67 /dev/shm
gcgnamslap Asia Testing 10.100.158.51 tmpfs tmpfs 15878640 26984 15851656 1 /run
gcgnamslap Asia Testing 10.100.158.51 tmpfs tmpfs 15878640 1807636 15878640 0 /sys/fs/cgroup
gcgnamslap Asia Testing 10.100.158.51 /dev/md1 ext3 96922 36667 55039 40 /boot
gcgnamslap Asia Testing 10.100.158.51 /dev/md6 ext3 62980468 28501072 31278452 48 /usr/sw
gcgnamslap Asia Testing 10.100.158.51 /dev/mapper/p1 ext4 1126568640 269553048 800534468 26 /usr/p1
gcgnamslap Asia Testing 10.100.158.51 /dev/mapper/p2 ext4 1126568640 85476940 984610576 8 /usr/p2
Please help to create a query to get the table in the above expected manner. Your kind inputs are highly appreciated..!! Thank You..!!
Hi @Vianapp, first of all, Splunk isn't a DB where you can modify field values; Splunk is a log monitor where logs are indexed and never updated afterwards. You can modify the Correlation Searches to add all the fields you need from the events, but users cannot update them as in a DB. In a Notable Event investigation you can add notes, but you cannot modify field values, which are managed using lookups and Summary Indexes. I suggest taking a training on ES before starting to use it: Splunk thinks differently than other systems. Ciao. Giuseppe
Hi @aguilard, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @nithys, this seems to be JSON format, so you can extract all the fields using INDEXED_EXTRACTIONS = JSON in the sourcetype or using the spath command (https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Spath). Then you can use the timechart command to get the time distribution of the error codes. Ciao. Giuseppe
Hi Team, I have the three log events below, which get status codes of 200, 400, and 500 in different logs. I need help to find the status-code error rate for all the different status codes with their respective times. Event 1: 400 error { [-]    body: { [-]      message: [ [-]        { [-]          errorMessage: must have required property 'objectIds'          field: objectIds        }        { [-]          errorMessage: must be equal to one of the allowed values : [object1,object2]          field: objectType        }        statusCode: 400      type: BAD_REQUEST_ERROR    }    headers: { [-]      Access-Control-Allow-Origin: *      Content-Type: application/json    }   hostname:     level: 50    msg: republish error response    statusCode: 400   time: **** }   Event 2: 500 error { [-]    awsRequestId:     body: { [-]      message: Unexpected token “ in JSON at position 98    }    headers: { [-]      Access-Control-Allow-Origin: *      Content-Type: application/json    }    msg: reprocess error response   statusCode: 500    time: *** } Event 3: Success { [-]    awsRequestId:     body: { [-]      message: republish request has been submitted for [1] ids    }    headers: { [-]      Access-Control-Allow-Origin: *      Content-Type: application/json    }    }    headers: { [+]    }    msg: republish success response    statusCode: 200    time: *** }
Thanks @bowesmana : I was able to achieve using case statement and the regex you gave. Thanks a lot  
Hey guys, I have a Node.js application and I use Winston to print out the logs for our application, e.g. logger.info({responseStatus:200}). I am not using a log file, just printing out the log. I am not quite sure what's causing the issue here. The log event works fine in other environments and displays as a separate event, so I can keep track of the event field names. But in the production environment, my logs are mixed with console.log output and treated as one event instead. It looks something like this (just an example, but similar). I am new to Splunk Enterprise and I am not quite sure where my configuration file is located. It's OK if there's no solution, but I would like to hear some advice from the Splunk experts on what may be causing this.
Hi @gcusello  1. With the query below I am trying to fetch three fields from three different event logs which match all three conditions. CASE is used to get the exact uppercase/lowercase match of "latest,material" from the first log event, "id,material" from the second log event, and "dynamoDB data retrieved for ids,dataNotFoundIdsCount,material" from the third log event: CASE("latest") AND "id" AND "dynamoDB data retrieved for ids" AND "material" Based on the above condition:
PST_TIME4 objectType version republishType publish nspConsumerList snsPublishedCount dataNotFoundIdsCount
2023-20-11 02:55:12 material latest id NSP ALL 3 1
2023-16-11 09:18:14 material latest id NSP ALL 3 1
2023-12-12 05:03:37 material latest id ALL ALL 1 2
2. CASE("latest") AND "id" AND "sns published count" AND "material" Appendcols is used to fetch sns published count, publish, version, republishInput along with the other filter conditions latest, id, material.
Note that Request.body is an array, which is flattened as multivalue. This means that any field inside Request.body is also multivalued. The code should handle this. The most common method is to add mvexpand against the array.   | spath input=Request.body path={} | mvexpand {} | spath input={}   Using the same emulation @dtburrows3 provides, the output is:
ParentId:
Request.body: [ { "recordLocator": "RYVBNQ", "depStartDate": "2023-12-14T14:00:19.671Z", "depEndDate": "2023-12-15T09:20:19.671Z" } ]
Requet.hostname: IT-SALI
Request.type: RequestLogDTO
depEndDate: 2023-12-15T09:20:19.671Z
depStartData: 2023-12-14T14:00:19.671Z
recordLocator: RYVBNQ
{}: { "recordLocator": "RYVBNQ", "depStartDate": "2023-12-14T14:00:19.671Z", "depEndDate": "2023-12-15T09:20:19.671Z" }
Good afternoon, I hope you are well. I am migrating my alert environment from TheHive to start using ES. I would like to know and learn if, in ES, when creating Correlations, I can configure a field in the notable event that analysts can edit. For example, when creating cases in TheHive, I include the desired field, and analysts set the value when they take the case for processing. Despite studying, I couldn't figure out how to implement this in a notable event so that analysts can provide inputs such as identifying the technology involved or deciding whether it should be forwarded. This would help me use it for auditing purposes later on. Is it possible to achieve this in ES?
Hi @Vox,   I’m a Community Moderator in the Splunk Community.  This question was posted 5 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post.   Thank you! 
Screenshots aren't particularly helpful; it is better to paste your search into a code block </>. Also, providing sample (anonymised) events or representative dummy events, again in a code block, helps. Having said that, does something like this help: | bin span=7d _time aligntime=earliest | stats max(version) as latestversion by ComputerName, _time | rex field=latestversion "(?<latestversionT>\d{6})" | eval today_date=strftime(now(),"%d-%m-%y") ``` No longer required? ``` | eval today_DAT=strftime(now(),"%y%m%d") ``` No longer required? ``` | eval diff = floor((relative_time(now(),"@d") - strptime(latestversionT, "%y%m%d"))/86400) | eval status = if(diff<=7,"Compliant","Non-Compliant") I generated some dummy sample data like this: | gentimes start=-30 increment=1h | rename starttime as _time | fields _time | eval ComputerName=mvindex(split("ABCDEFGHIJ",""),random()%10) | eval version=strftime(relative_time(_time,"-".(random()%5+1)."d"),"%y%m%d").printf("%03d",(random()%100))
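The core of that answer is the day-difference computation: snap now() to midnight, parse the 6-digit date prefix out of the version string, and compare the difference against 7 days. A small Python sketch of the same arithmetic, with hypothetical version strings in the %y%m%d-plus-suffix format the SPL assumes:

```python
from datetime import datetime, date

def compliance_status(latest_version, today):
    # Mirrors: floor((relative_time(now(),"@d") - strptime(latestversionT,"%y%m%d"))/86400)
    # followed by: status = if(diff<=7, "Compliant", "Non-Compliant")
    version_date = datetime.strptime(latest_version[:6], "%y%m%d").date()
    diff = (today - version_date).days
    return "Compliant" if diff <= 7 else "Non-Compliant"

print(compliance_status("231210042", date(2023, 12, 15)))  # prints "Compliant" (5 days old)
print(compliance_status("231101007", date(2023, 12, 15)))  # prints "Non-Compliant" (44 days old)
```

Using dates rather than raw epoch seconds sidesteps the partial-day rounding that the floor(... / 86400) handles in SPL.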
Hello, is it possible for someone to help? I entered the log information into the program, but the graphs do not show anything.
I fixed the problem by simply restarting the cluster and it worked. Thanks
Hi @AL3Z, avoid using real-time searches! As for your requirement: if it's one day, use one day; there isn't a general recommendation, it depends only on your requirement. Ciao. Giuseppe