Thank you for all the inputs. Here is the final query:

index=Github_Webhook source="http:github-dev-token" eventtype="GitHub::Push" sourcetype="json_ae_git-webhook"
| rename repository.name as RepoName
| spath path=commits{} output=commitscollection
| mvexpand commitscollection
| fields _time RepoName commitscollection
| spath input=commitscollection
| table RepoName id added{} modified{} removed{} author.name author.email message

Thanks to all the responders. This helps in getting the commits from the array. The next challenge is that if you pull the data for all the other fields with the same approach, the values cannot be mapped to each other. To address this, we use mvexpand to split the array into separate events. Once the array is split into separate events, we apply the same spath logic to extract the fields from each one. Hope this helps.
@ITWhisperer  This is my final query, but the fields are still not coming from the lookup file. source_host and the key field in the lookup file are the same.

index=*infra* metric_label="Host : Reporting no data"
| bin span=6m@m metric_value as 6_min_data
| stats count(eval(metric_value=0)) as uptime count(eval(metric_value=1)) as downtime by 6_min_data, source_host
| eval total_uptime = uptime*360
| eval total_dowtime = downtime*360
| eval total_uptime = if(isnull(total_uptime),0,total_uptime)
| eval total_downtime = if(isnull(total_dowtime),0, total_dowtime)
| eval avg_uptime_perc = round((total_uptime/(total_uptime+total_downtime))*100 ,2)
| eval avg_downtim_perc = round((total_downtime/(total_uptime+total_downtime))*100,2)
| eval total_uptime = tostring(total_uptime, "duration")
| eval total_downtime = tostring(total_downtime, "duration")
| rename "total_uptime" as "Total Uptime", "total_downtime" as "Total Downtime", avg_uptime_perc as "Average uptime in %", avg_downtim_perc as "Average Downtime in %" source_host as "Source Host"
| lookup *_common_lookup_topology-technical-detail_001.csv key as source_host
| table key type system "Source Host" "Total Uptime" "Total Downtime" "Average uptime in %" "Average Downtime in %"
Hi, I am trying to create a case with your official support. Our user is "unionub". The required input field "Select Entitlement" (a combo box) is always empty, and I cannot select anything, so I cannot proceed to create the case. I tried with both Firefox and Chrome. Please advise. Altin Karaulli, Security Officer, Unionbank
Hi @yuanliu  Thank you again for your suggestion. Below I posted a sample search closer to the real search, where I have multiple subnets in the "search filter" plus an additional field filter. When I removed the "search ip filter" and moved it up next to index=risk, the search was 3 seconds slower, but the results were the same.
1) What is the difference between using "| search ip=" and "ip="? They give the same outcome.
2) Sorry about not mentioning dedup. Because dedup will remove any rows that have empty/null fields, I put the dedup after the join and added the "fillnull" command. If I moved it into each subsearch, I would need to add a fillnull command for each subsearch, which would probably add a delay. What do you think?
I appreciate your suggestions again. Thanks.

Before removing "filter ip":

| inputlookup host.csv
| search (ip="10.1.0.0/16" OR ip="10.2.0.0/16" OR ip="10.3.0.0/16" OR ip="10.4.0.0/16" OR ip="10.5.0.0/16" OR ip="10.6.0.0/16")
| rename ip_address as ip
| join max=0 type=left ip
    [ search index=risk
    | fields ip risk score contact
    | where isnotnull(ip) AND isnotnull(risk) AND isnotnull(score)
    | search (ip="10.1.0.0/16" OR ip="10.2.0.0/16") AND (company="compA" OR company="compB") ]
| join max=0 type=left ip
    [ search index=risk ip="10.2.0.0/16"
    | fields ip risk score contact
    | where isnotnull(ip) AND isnotnull(risk) AND isnotnull(score)
    | search (ip="10.3.0.0/16" OR ip="10.4.0.0/16") AND (company="compA" OR company="compB") ]
| join max=0 type=left ip
    [ search index=risk ip="10.3.0.0/16"
    | fields ip risk score contact
    | search (ip="10.5.0.0/16" OR ip="10.6.0.0/16") AND (company="compA" OR company="compB") ]
| fillnull value=0 score
| fillnull value="N/A" ip risk contact
| dedup ip risk score contact
| table ip, host, risk, score, contact

After removing "filter ip" (3 seconds slower):

| inputlookup host.csv
| search (ip="10.1.0.0/16" OR ip="10.2.0.0/16" OR ip="10.3.0.0/16" OR ip="10.4.0.0/16" OR ip="10.5.0.0/16" OR ip="10.6.0.0/16")
| rename ip_address as ip
| join max=0 type=left ip
    [ search index=risk (ip="10.1.0.0/16" OR ip="10.2.0.0/16") AND (company="compA" OR company="compB")
    | fields ip risk score contact
    | where isnotnull(ip) AND isnotnull(risk) AND isnotnull(score) ]
| join max=0 type=left ip
    [ search index=risk ip="10.2.0.0/16" (ip="10.3.0.0/16" OR ip="10.4.0.0/16") AND (company="compA" OR company="compB")
    | fields ip risk score contact
    | where isnotnull(ip) AND isnotnull(risk) AND isnotnull(score) ]
| join max=0 type=left ip
    [ search index=risk ip="10.3.0.0/16" (ip="10.5.0.0/16" OR ip="10.6.0.0/16") AND (company="compA" OR company="compB")
    | fields ip risk score contact ]
| fillnull value=0 score
| fillnull value="N/A" ip risk contact
| dedup ip risk score contact
| table ip, host, risk, score, contact
Default System Timezone is selected in my preferences. I don't think that is the problem, because my other searches are working fine.
Splunk appears to be interpreting the event timestamp correctly. The displayed time is based on your selected time zone. What do you have selected in your preferences?
I did the same, but nothing changed:

TIME_FORMAT = %Y-%m-%dT%H:%M:%S%Z

I ran the above search at 5:35 PM and the latest event time is 12:30, so it is approximately 5 hours behind.
Thank you @PickleRick 
After creating a new LDAP strategy and entering all required information, I get an error when saving:

Entry not saved, the following error was reported: Syntax Error: Unexpected token < in JSON at position 5

I have verified multiple times that all entries are correct.
Hi @zksvc , version 4.38.0 is compatible with Splunk 9.1.x, 9.2.x, and 9.3.x. Anyway, this app mainly gives you new use cases and occasionally corrects some old ones, as described in the documentation. Ciao. Giuseppe
Hi PickleRick, Yes, I'm very aware that the structure I'm working with is really bad. But say I do pre-process the data (the data input is currently a modular input): how am I able to do the same for data that has already been indexed? Furthermore, the data is continuous, and I can only retrieve up to a maximum of 10 days back (I can't change this, unfortunately). So if I adjust the pre-processing and turn the data structure into something that makes sense, it will only take effect for future data. If I wanted to display the same table, the SPL won't work with the older data in the same index.
Hi @man03359 , I haven't used this app. Anyway, it uses a summary index (which doesn't consume license) and a metric index, which could consume license at 150 bytes per event. About CPU and RAM: it will surely use some of them; the only way to know is to install it and monitor your resources. Ciao. Giuseppe
OK. I think I already told you about badly formed data. While in some cases you can argue about which JSON structure better represents your data, this one is clearly not a good approach, especially for Splunk. Let's take this snippet:

"enemy_information": ["name", "location", "powers"],
"enemy_information_values": [
  ["Doomsday", "Kryptonian Prison", ["Super Strength", [...] "Immunity to Kryptonite"]],
  [...]
]

There is no structural relation between enemy_information and enemy_information_values. From Splunk's point of view, those will parse out (leaving aside the possibly nested multivalue fields, which are not straightforward to deal with) as two separate multivalue fields with no relationship whatsoever between the values of one field and the values of the other. If anything, it should be either

"enemy_attributes": {"name": "Doomsday", "location": "Seattle, WA", [...]}

or

"enemy_attributes": [{"name": "name", "value": "Doomsday"}, {"name": "location", "value": "Paris, France"}, ...]

Each option has its pros and cons, but the one you're presenting only seems to have cons.
In addition to what @richgalloway already said: you don't need to "convert" timestamps to another timezone. The timestamps are reported by the source in some timezone (the timezone info may or may not be included in the timestamp; if it is, you can use it, and if it is not, you have to set it explicitly). But the timestamp, as parsed into the _time field, is stored as an "absolute" timestamp and is shown in the UI using your user's configured timezone. So the same event will be shown at 14:39 if your user uses UTC, or 16:39 if they use CEST, and so on. But the event's contents remain the same.
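For the explicit case, a minimal sketch of a props.conf stanza on the parsing tier (the sourcetype name here is hypothetical; TZ takes a standard zoneinfo name):

[my:sourcetype]
TZ = America/New_York

This only tells Splunk how to interpret timestamps that lack timezone info at index time; the display timezone remains whatever each user has set in their preferences.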
One caveat: there are occasionally situations (especially with newly introduced features) where the .spec file does not contain a proper entry. It doesn't happen often, but it does happen sometimes.
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
The time zone is included in the timestamp. Tell Splunk about it and it will automatically convert the timestamp.

[cloudflare:json]
disabled = false
TIME_PREFIX = "EdgeStartTimestamp":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%Z
MAX_TIMESTAMP_LOOKAHEAD = 20
You are of course welcome to post feedback on the docs page. I do that fairly often when I find something not explained clearly enough or in enough detail. There's a little feedback form at the bottom of each docs page.
Can someone please help me with this? @gcusello 
Reasonable enough, and yes, I do get an incorrect linebreaker. So in this case the "fix" is to install the TA on the HF layer. Maybe it's just me, but could this be indicated with a bit more "urgency" in the documentation? The first table says "This add-on supports forwarders of any type for data collection. The host must run a supported version of *nix.", which is not really the same as "needed for correct parsing of logs". The "Distributed deployment feature compatibility" table does not even list HF, so while it is logical, it is not really intuitive based on the documentation. Thanks and all the best