All Posts
Hi @Vianapp, first of all, Splunk isn't a database where you can modify field values: Splunk is a log monitor, where logs are indexed and never updated afterwards. You can modify the Correlation Searches to add all the fields you need from the events, but users cannot update them as in a DB. In the Notable Events investigation you can add notes, but you cannot modify field values; that kind of data is managed using lookups and Summary Indexes. I suggest taking a training on ES before starting to use it: Splunk thinks differently than other systems. Ciao. Giuseppe
Hi @aguilard, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @nithys, this seems to be JSON, so you can extract all the fields using INDEXED_EXTRACTIONS = JSON in the sourcetype, or using the spath command (https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Spath). Then you can use the timechart command to get the time distribution of the error codes. Ciao. Giuseppe
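For instance, assuming the events are indexed with statusCode extractable via spath (the index and sourcetype names below are placeholders, not from the thread), a sketch of an overall error-rate timechart could look like:

```
index=your_index sourcetype=your_sourcetype
| spath path=statusCode
| timechart span=1h count(eval(statusCode >= 400)) AS errors count AS total
| eval error_rate_pct = round(100 * errors / total, 2)
```

Alternatively, `| timechart span=1h count by statusCode` gives the per-status-code time distribution that Giuseppe mentions.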
Hi Team, I have the below three log events, which contain status codes 200, 400, and 500 in different logs. I need help to find the status-code error rate for all the different status codes, with the respective time.

Event 1: 400 error
{
  "body": {
    "message": [
      { "errorMessage": "must have required property 'objectIds'", "field": "objectIds" },
      { "errorMessage": "must be equal to one of the allowed values : [object1,object2]", "field": "objectType" }
    ],
    "statusCode": 400,
    "type": "BAD_REQUEST_ERROR"
  },
  "headers": { "Access-Control-Allow-Origin": "*", "Content-Type": "application/json" },
  "hostname": "",
  "level": 50,
  "msg": "republish error response",
  "statusCode": 400,
  "time": "****"
}

Event 2: 500 error
{
  "awsRequestId": "",
  "body": { "message": "Unexpected token \" in JSON at position 98" },
  "headers": { "Access-Control-Allow-Origin": "*", "Content-Type": "application/json" },
  "msg": "reprocess error response",
  "statusCode": 500,
  "time": "***"
}

Event 3: success
{
  "awsRequestId": "",
  "body": { "message": "republish request has been submitted for [1] ids" },
  "headers": { "Access-Control-Allow-Origin": "*", "Content-Type": "application/json" },
  "msg": "republish success response",
  "statusCode": 200,
  "time": "***"
}
Thanks @bowesmana: I was able to achieve this using a case statement and the regex you gave. Thanks a lot!
Hey guys, I have a Node.js application and I use Winston to print out the logs for our application, e.g. logger.info({responseStatus: 200}). I am not using a log file, just printing the logs to the console. I am not quite sure what's causing the issue here. The logging works fine in other environments, with each log displayed as a separate event, so I can keep track of the event field names. But in the production environment, my logs are mixed with console.log output and treated as one event instead. It looks something like the example below (screenshot not included, but it looks similar). I am new to Splunk Enterprise, and I am not quite sure where my configuration file is located. It's OK if there's no solution, but I would like to hear some advice from the Splunk experts on what may be causing this.
Hi @gcusello

1. With the below query I am trying to fetch three fields from three different event logs which match all 3 conditions. CASE is used to get the exact uppercase/lowercase match of "latest,material" from the first log event, "id,material" from the second log event, and "dynamoDB data retrieved for ids,dataNotFoundIdsCount,material" from the third log event:

CASE("latest") AND "id" AND "dynamoDB data retrieved for ids" AND "material"

Based on the above condition:

PST_TIME4 | objectType | version | republishType | publish | nspConsumerList | snsPublishedCount | dataNotFoundIdsCount
2023-20-11 02:55:12 | material | latest | id | NSP | ALL | 3 | 1
2023-16-11 09:18:14 | material | latest | id | NSP | ALL | 3 | 1
2023-12-12 05:03:37 | material | latest | id | ALL | ALL | 1 | 2

2. CASE("latest") AND "id" AND "sns published count" AND "material" — appendcols is used to fetch snsPublishedCount, publish, version, and republishInput, along with the other filter conditions (latest, id, material).
Note that Request.body is an array, which is flattened as multivalue. This means that any field inside Request.body is also multivalued. The code should handle this. The most common method is to add mvexpand against the array.

| spath input=Request.body path={}
| mvexpand {}
| spath input={}

Using the same emulation @dtburrows3 provides, the output is:

ParentId:
Request.body: [ { "recordLocator": "RYVBNQ", "depStartDate": "2023-12-14T14:00:19.671Z", "depEndDate": "2023-12-15T09:20:19.671Z" } ]
Requet.hostname: IT-SALI
Request.type: RequestLogDTO
depEndDate: 2023-12-15T09:20:19.671Z
depStartData: 2023-12-14T14:00:19.671Z
recordLocator: RYVBNQ
{}: { "recordLocator": "RYVBNQ", "depStartDate": "2023-12-14T14:00:19.671Z", "depEndDate": "2023-12-15T09:20:19.671Z" }
Good afternoon, I hope you are well. I am migrating my alert environment from TheHive to start using ES. I would like to know and learn if, in ES, when creating Correlations, I can configure a field in the notable event that analysts can edit. For example, when creating cases in TheHive, I include the desired field, and analysts set the value when they take the case for processing. Despite studying, I couldn't figure out how to implement this in a notable event so that analysts can provide inputs such as identifying the technology involved or deciding whether it should be forwarded. This would help me use it for auditing purposes later on. Is it possible to achieve this in ES?
Hi @Vox,   I’m a Community Moderator in the Splunk Community.  This question was posted 5 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post.   Thank you! 
Screenshots aren't particularly helpful; it is better to paste your search into a code block </>. Providing sample (anonymised) events or representative dummy events, again in a code block, also helps. Having said that, does something like this help:

| bin span=7d _time aligntime=earliest
| stats max(version) as latestversion by ComputerName, _time
| rex field=latestversion "(?<latestversionT>\d{6})"
| eval today_date=strftime(now(),"%d-%m-%y") ``` No longer required? ```
| eval today_DAT=strftime(now(),"%y%m%d") ``` No longer required? ```
| eval diff = floor((relative_time(now(),"@d") - strptime(latestversionT, "%y%m%d"))/86400)
| eval status = if(diff<=7,"Compliant","Non-Compliant")

I generated some dummy sample data like this:

| gentimes start=-30 increment=1h
| rename starttime as _time
| fields _time
| eval ComputerName=mvindex(split("ABCDEFGHIJ",""),random()%10)
| eval version=strftime(relative_time(_time,"-".(random()%5+1)."d"),"%y%m%d").printf("%03d",(random()%100))
Hello, is it possible for someone to help? I entered the log information into the program, but the graphs do not show anything.
I fixed the problem by simply restarting the cluster, and it worked. Thanks
Hi @AL3Z, avoid using realtime! It depends on your requirement: if one day is enough, use one day. There isn't a general recommendation; it's only related to your requirement. Ciao. Giuseppe
OK, let's take up the argument again... upgrading from 7.x to 8.x, new servers, new infrastructure... same annoying message... We saw how to hide warning message icons in dashboards. Well, this is good! Now, how do we set a threshold for the ms, or remove the message from the UI altogether? It is becoming very, very annoying!
Yep! Let's leave it as said... if someone else wants to add something, you're welcome. tcp_Kprocessed == KB received by the receiver as a packet of events; kb == the real KB (compressed) written to indexer storage. Explicit and simple: tcp_Kprocessed == the networking throughput of the event packets; kb == the compressed data written to indexer storage for the previous packet.
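As a sketch of how to compare the two (assuming the standard metrics.log groups on the indexer: tcpin_connections for received traffic and per_index_thruput for indexed volume; span and field aliases are illustrative):

```
index=_internal source=*metrics.log* group=tcpin_connections
| timechart span=5m sum(tcp_Kprocessed) AS kb_received_on_wire

index=_internal source=*metrics.log* group=per_index_thruput
| timechart span=5m sum(kb) AS kb_written_to_storage
```

Run side by side, the first chart approximates the network throughput of incoming event packets, the second the compressed data actually written to storage.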
Please take a look at this blog post: https://www.linkedin.com/pulse/unveiling-secrets-enhancing-adobe-aem-performance-through-kulkarni-xrzec/ — Optimizing Adobe AEM Site Performance: A Deep Dive with Splunk RZ/SPL
Hello Team, I am trying to set up a proxy on a Splunk Heavy Forwarder. I did it by setting the environment variable http_proxy, but Splunk's Python is not honouring the environment variable set on the Linux machine where the HF is installed. If I run the Python script with the system Python, it gets the data through the proxy; if I run the same script with /opt/splunk/bin/splunk cmd python, it does not go through the proxy. Is there any way we can make Splunk honour environment variables?
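One possible workaround sketch (not an official Splunk setting): have the script read the proxy variables itself and pass them explicitly to its HTTP client, so it no longer depends on the interpreter honouring them. The function name and the requests usage below are illustrative assumptions:

```python
import os

def build_proxies():
    """Read http_proxy/https_proxy from the environment explicitly,
    so the script works even if the interpreter ignores them."""
    proxies = {}
    for scheme in ("http", "https"):
        # Honour both lower- and upper-case variants of the variable.
        url = os.environ.get(scheme + "_proxy") or os.environ.get(scheme.upper() + "_PROXY")
        if url:
            proxies[scheme] = url
    return proxies

# Illustrative usage: pass the dict explicitly to requests instead of
# relying on implicit environment handling:
#   requests.get("https://example.com", proxies=build_proxies(), timeout=30)
```

The same dict-passing approach works with urllib's ProxyHandler if requests is not available in the Splunk-bundled Python.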
Dear All,

Scenario: one AV server has multiple endpoints reporting to it. This AV server is integrated with Splunk, and through the AV server we are receiving DAT version info for all the reporting endpoints.

Requirement: need to generate a monthly AV DAT compliance report. The criterion for DAT compliance is 7 days: within 7 days a system should be updated to the latest DAT.

Work done so far: there is no intelligence in the data to get the latest DAT from the AV-Splunk logs; only endpoints that are updated with some DAT are reporting. I used the eval command and tied the latest/today DAT to today's date (used today_date, converted to today_DAT). Based on that, I am able to calculate the DAT compliance for 7 days, keeping today_DAT for the 8th day as a reference. This Splunk query gives correct data for any time frame, but for the past 7 days' compliance only.

Issue: for the past 30 days, i.e. the 25th to the 25th of every month, I want to divide the logs into 7-day time frames starting e.g. 25th Dec, 1st Jan, 8th Jan, 15th Jan, 22nd Jan, up to 25th Jan (last slot less than 7 days), then calculate compliance for each 7-day time frame, and finally combine them to know the overall compliance on 25th Jan for the monthly report.

Where I am stuck: in the current query I tried to add the bin command with a 7-day span, but I am unable to tie the latest DAT date (the today_DAT date for 1st Jan) to the 7th day for the first bin, then 8th Jan for the second bin, and so on. If there is any other method/query to do the same, kindly let me know.

PFA screenshot for your reference. @PickleRick @ITWhisperer @yuanliu
@gcusello Which one would be better, running it daily or in realtime? Can you please suggest? We are into security-specific use cases.