All Posts


Thank you @gcusello. We (Unionbank) are the customer. It seems it is a contractual matter, but I would like Splunk support to show a message about this after logon, instead of simply leaving the field impossible to select. It is confusing. Regards, Altin
If I use | addtotals col=t row=f labelfield=Index label="Overall Total", I get an incorrect total, because there is one index with multiple sourcetype values.
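For what it's worth, if the goal is a single grand-total row for the GB column only, a hedged alternative is addcoltotals (field names taken from this thread; the leading search is whatever produces the table):

... | addcoltotals labelfield=Index label="Overall Total" GB

Listing GB explicitly restricts the summary row to that one column, and note that labelfield must match the exact case of the field in the results (Index vs index), otherwise the label lands in a new, separate column.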
I presume atn_common_lookup_topology-technical-detail_001.csv has the fields "key", "type" and "system"? Do you have proper access to the lookup file?

| inputlookup atn_common_lookup_topology-technical-detail_001.csv
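For a quick check of both access and field names, a hedged variant (the head/table steps are just illustrative):

| inputlookup atn_common_lookup_topology-technical-detail_001.csv
| head 5
| table key type system

If this returns rows with the expected columns, the lookup file itself is readable and the field names match.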
You can use addtotals as below:

| addtotals col=t row=f labelfield=index label="Overall Total"

Please accept the solution and hit Karma, if this helps!
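For the data posted in this thread, the summary row appended by that command would come out roughly like this (the GB total is computed from the question's values; layout illustrative):

Index            Source-Type                   GB
aws_vpcflow      aws:vpcflow                   26192.00305
                 aws:cloudwatchlogs:vpcflow    32.695269
windows          windows:fluentd               19939.02727
                 windows                       9713.832884
                 WinEventLog:Security          8.928759
Overall Total                                  55886.487232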
I need to add the total GB. Please let me know how to add the overall total.

Index          Source-Type                   GB
aws_vpcflow    aws:vpcflow                   26192.00305
               aws:cloudwatchlogs:vpcflow    32.695269
windows        windows:fluentd               19939.02727
               windows                       9713.832884
               WinEventLog:Security          8.928759
@ITWhisperer Now tried this but still no luck:

index=atn*infra*tier3*
| bin span=6m@m metric_value as 6_min_data
| stats count(eval(metric_value=0)) as uptime count(eval(metric_value=1)) as downtime by 6_min_data, source_host
| eval total_uptime = uptime*360
| eval total_dowtime = downtime*360
| eval total_uptime = if(isnull(total_uptime),0,total_uptime)
| eval total_downtime = if(isnull(total_dowtime),0, total_dowtime)
| eval avg_uptime_perc = round((total_uptime/(total_uptime+total_downtime))*100 ,2)
| eval avg_downtim_perc = round((total_downtime/(total_uptime+total_downtime))*100,2)
| eval total_uptime = tostring(total_uptime, "duration")
| eval total_downtime = tostring(total_downtime, "duration")
| lookup atn_common_lookup_topology-technical-detail_001.csv key as source_host
| rename "total_uptime" as "Total Uptime", "total_downtime" as "Total Downtime", avg_uptime_perc as "Average uptime in %", avg_downtim_perc as "Average Downtime in %" source_host as "Source Host"
| table "type" system "Source Host" "Total Uptime" "Total Downtime" "Average uptime in %" "Average Downtime in %"
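One thing worth checking in the lookup step: naming the output fields explicitly makes failures easier to spot. A hedged variant of just that command (field names taken from this thread):

| lookup atn_common_lookup_topology-technical-detail_001.csv key AS source_host OUTPUT type system

If type and system still come back empty, the key values in the CSV probably don't exactly match the source_host values in the events (case, whitespace, or domain suffixes are the usual suspects).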
Hi @altink, you must be enabled by the customer to open a case for them; otherwise the customer must open the case themselves. To be enabled, the customer must send a request to Splunk Support or to their Splunk Sales Engineer. Ciao. Giuseppe
Hi, I have logs written in the manner below:

26/08/2024 10:27 method=are status=failed run_id_123
26/08/2024 10:28 method=are status=failed run_id_123
26/08/2024 10:29 method=are status=failed run_id_123
26/08/2024 10:30 method=are status=completed run_id_123
failure_reason1
failure_reason_2
failure_reason_3
failure_reason_4

I'm trying to check whether the latest retry is completed or failed; if it failed, print the failure reason from the next 5 lines. Please help.
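Not a full answer, but a sketch of the first half of this (the index name is a placeholder and the rex pattern is an assumption about the raw format):

index=<your_index> method=*
| rex field=_raw "(?<run_id>run_id_\d+)"
| stats latest(status) as last_status latest(_raw) as last_event by run_id
| where last_status="failed"

Getting at the reason lines depends on whether they are indexed as part of the same event (multiline) or as separate events; in the multiline case the reasons are already inside last_event.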
Try doing the lookup before you rename the field you are using for the lookup! Also, does your lookup file really start with a "*"? If so, try renaming it to something without a wildcard in it.
Thank you for all the inputs. Here is the final query:

index=Github_Webhook source="http:github-dev-token" eventtype="GitHub::Push" sourcetype="json_ae_git-webhook"
| rename repository.name as RepoName
| spath path=commits{} output=commitscollection
| mvexpand commitscollection
| fields _time RepoName commitscollection
| spath input=commitscollection
| table RepoName id added{} modified{} removed{} author.name author.email message

Thanks to all the responders.

| spath path=commits{} output=commitscollection --> this gets the commits out of the array. The next challenge: if you pull the data for the other fields with the same approach, those values cannot be mapped to each other. To address this, use mvexpand to split the array into separate events; once the array is split, the same spath logic extracts the fields per event. Hope this helps.
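To make that mapping point concrete, a self-contained toy that can be pasted into a search bar (the JSON payload is invented; only the spath/mvexpand pattern mirrors the query above):

| makeresults
| eval _raw="{\"commits\":[{\"id\":\"a1\",\"message\":\"first\"},{\"id\":\"b2\",\"message\":\"second\"}]}"
| spath path=commits{} output=commitscollection
| mvexpand commitscollection
| spath input=commitscollection
| table id message

Without the mvexpand step, id and message would each come back as independent multivalue fields, with nothing tying the first id to the first message.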
@ITWhisperer This is my final query, but the fields are still not coming from the lookup file. source_host and the key field in the lookup file are the same.

index=*infra* metric_label="Host : Reporting no data"
| bin span=6m@m metric_value as 6_min_data
| stats count(eval(metric_value=0)) as uptime count(eval(metric_value=1)) as downtime by 6_min_data, source_host
| eval total_uptime = uptime*360
| eval total_dowtime = downtime*360
| eval total_uptime = if(isnull(total_uptime),0,total_uptime)
| eval total_downtime = if(isnull(total_dowtime),0, total_dowtime)
| eval avg_uptime_perc = round((total_uptime/(total_uptime+total_downtime))*100 ,2)
| eval avg_downtim_perc = round((total_downtime/(total_uptime+total_downtime))*100,2)
| eval total_uptime = tostring(total_uptime, "duration")
| eval total_downtime = tostring(total_downtime, "duration")
| rename "total_uptime" as "Total Uptime", "total_downtime" as "Total Downtime", avg_uptime_perc as "Average uptime in %", avg_downtim_perc as "Average Downtime in %" source_host as "Source Host"
| lookup *_common_lookup_topology-technical-detail_001.csv key as source_host
| table key type system "Source Host" "Total Uptime" "Total Downtime" "Average uptime in %" "Average Downtime in %"
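(For anyone skimming: the rename in this version runs before the lookup, so by the time | lookup ... key as source_host executes there is no source_host field left to match on; the reply above suggests reordering for exactly that reason. A minimal illustration of failing vs. working order, with my_lookup.csv as a stand-in name:

| rename source_host as "Source Host" | lookup my_lookup.csv key as source_host      <- matches nothing
| lookup my_lookup.csv key as source_host | rename source_host as "Source Host"      <- matches
)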
Hi, I am trying to create a case at your official support. Our user is "unionub". The required input field "Select Entitlement" - a combo box - is always empty, and I cannot select anything, so I cannot go on to create the case. I tried with both Firefox and Chrome. Please advise. Altin Karaulli, Security Officer, Unionbank
Hi @yuanliu, thank you again for your suggestion. Below I posted my sample search, closer to the real search, where I have multiple subnets in the "search filter" plus an additional field filter. When I removed the "search ip filter" and moved it up next to index=risk, the search was 3 seconds slower, but the results were the same.

1) What is the difference between using "| search ip=" and "ip="? They give the same outcome.
2) Sorry about not mentioning dedup. Because dedup will remove any rows that have empty/null fields, I put the dedup after the join and added the "fillnull" command. If I moved it to each subsearch, I would need to add a fillnull command for each subsearch, which would probably add a delay. What do you think?

I appreciate your suggestion again. Thanks.

Before removing "filter ip":

| inputlookup host.csv
| search (ip="10.1.0.0/16" OR ip="10.2.0.0/16" OR ip="10.3.0.0/16" OR ip="10.4.0.0/16" OR ip="10.5.0.0/16" OR ip="10.6.0.0/16")
| rename ip_address as ip
| join max=0 type=left ip
    [ search index=risk
    | fields ip risk score contact
    | where isnotnull(ip) AND isnotnull(risk) AND isnotnull(score)
    | search (ip="10.1.0.0/16" OR ip="10.2.0.0/16") AND (company="compA" OR company="compB") ]
| join max=0 type=left ip
    [ search index=risk ip="10.2.0.0/16"
    | fields ip risk score contact
    | where isnotnull(ip) AND isnotnull(risk) AND isnotnull(score)
    | search (ip="10.3.0.0/16" OR ip="10.4.0.0/16") AND (company="compA" OR company="compB") ]
| join max=0 type=left ip
    [ search index=risk ip="10.3.0.0/16"
    | fields ip risk score contact
    | search (ip="10.5.0.0/16" OR ip="10.6.0.0/16") AND (company="compA" OR company="compB") ]
| fillnull value=0 score
| fillnull value="N/A" ip risk contact
| dedup ip risk score contact
| table ip, host, risk, score, contact

After removing "filter ip" (3 seconds slower):

| inputlookup host.csv
| search (ip="10.1.0.0/16" OR ip="10.2.0.0/16" OR ip="10.3.0.0/16" OR ip="10.4.0.0/16" OR ip="10.5.0.0/16" OR ip="10.6.0.0/16")
| rename ip_address as ip
| join max=0 type=left ip
    [ search index=risk (ip="10.1.0.0/16" OR ip="10.2.0.0/16") AND (company="compA" OR company="compB")
    | fields ip risk score contact
    | where isnotnull(ip) AND isnotnull(risk) AND isnotnull(score) ]
| join max=0 type=left ip
    [ search index=risk ip="10.2.0.0/16" (ip="10.3.0.0/16" OR ip="10.4.0.0/16") AND (company="compA" OR company="compB")
    | fields ip risk score contact
    | where isnotnull(ip) AND isnotnull(risk) AND isnotnull(score) ]
| join max=0 type=left ip
    [ search index=risk ip="10.3.0.0/16" (ip="10.5.0.0/16" OR ip="10.6.0.0/16") AND (company="compA" OR company="compB")
    | fields ip risk score contact ]
| fillnull value=0 score
| fillnull value="N/A" ip risk contact
| dedup ip risk score contact
| table ip, host, risk, score, contact
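On point 2, a hedged sketch of what the per-subsearch variant would look like for one of the joins (same field names as above; whether it is actually faster would need testing on real data):

| join max=0 type=left ip
    [ search index=risk (ip="10.1.0.0/16" OR ip="10.2.0.0/16") AND (company="compA" OR company="compB")
    | fields ip risk score contact
    | fillnull value=0 score
    | fillnull value="N/A" risk contact
    | dedup ip risk score contact ]

Deduplicating inside the subsearch shrinks the result set before the join runs, which is usually where join-heavy searches spend their time.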
"Default System Timezone" is selected in my preferences. I don't think that is the problem, because my other searches are working fine.
Splunk appears to be interpreting the event timestamp correctly. The displayed time is based on your selected time zone. What do you have selected in your preferences?
I did the same, but nothing changed:

TIME_FORMAT = %Y-%m-%dT%H:%M:%S%Z

I ran the above search at 5:35 PM and the latest event time is 12:30, so approx. 5 hours behind.
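A constant 5-hour offset usually points at time-zone handling rather than the format string: strptime's %Z matches time-zone names, and often fails to match numeric offsets like +0000 in raw data. A hedged props.conf sketch (the sourcetype name is a placeholder; %z assumes the raw timestamp carries a numeric offset):

[your_sourcetype]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
# or, if the raw timestamp has no offset at all, pin the zone explicitly:
# TZ = UTC

Either change only affects newly indexed events; already-indexed events keep their original _time.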
Thank you @PickleRick 
After creating a new LDAP strategy and entering all required information, I get an error when saving:

Entry not saved, the following error was reported: Syntax Error: Unexpected token < in JSON at position 5

I have verified all entries are correct multiple times.
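"Unexpected token < in JSON" generally means the browser received an HTML page (often an error or login page) where the UI expected a JSON response, so the problem may be server-side rather than in the LDAP values themselves. As a workaround while that is investigated, the strategy can be defined directly in authentication.conf; a hedged sketch with placeholder values (stanza keys per standard Splunk LDAP configuration, everything else invented for illustration):

# $SPLUNK_HOME/etc/system/local/authentication.conf
[authentication]
authType = LDAP
authSettings = my_ldap_strategy

[my_ldap_strategy]
host = ldap.example.com
port = 389
bindDN = cn=splunk-bind,ou=service,dc=example,dc=com
bindDNpassword = <password>
userBaseDN = ou=users,dc=example,dc=com
userNameAttribute = sAMAccountName
realNameAttribute = cn
groupBaseDN = ou=groups,dc=example,dc=com
groupNameAttribute = cn
groupMemberAttribute = member

A restart (or a reload of the authentication endpoint) is needed for the change to take effect.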
Hi @zksvc, version 4.38.0 is compatible with Splunk 9.1.x, 9.2.x and 9.3.x. Anyway, this app mainly gives you new use cases and occasionally corrects some old ones, as described in the documentation. Ciao. Giuseppe
Hi PickleRick, yes, I'm very aware that the structure I'm working with is really bad. But say I do pre-processing on the data (the input is currently a modular input): how am I able to do the same on data that has already been indexed? Furthermore, the data is continuous, and I'm only able to retrieve up to a maximum of 10 days back (I can't change this, unfortunately). So if I adjust the pre-processing and turn the data structure into something that makes sense, this will only take effect for future data. If I wanted to display the same table, the SPL won't work with the older data in the same index.
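One hedged pattern for bridging the old data, assuming search-time extractions can pull the needed fields out of the badly structured events: normalize the historical events with SPL and write the normalized copy into a summary index once, then point the table's SPL at the summary index plus the new, well-structured feed. A sketch with placeholder names (the index names and the coalesce fields are invented for illustration):

index=old_messy_index earliest=-10d@d latest=now
| eval normalized_field=coalesce(field_from_new_format, field_from_old_format)
| table _time host normalized_field
| collect index=normalized_summary

collect is the standard way to write search results into another index (with its default stash sourcetype, summary-indexed data typically doesn't count against license). After one backfill run, the dashboard SPL only needs to search the summary index together with the new input.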