All Posts

Something like this:

index=B | stats count by Reporting_Host | search NOT [| inputlookup inventory.csv | table Hostname ]

inventory.csv has the table picked up from index A. The lookup query is:

index=B | stats values("IP address") by Hostname Operating_system

PS: This query is not working, just a thought.
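A common way to write this kind of exclusion (a sketch, assuming inventory.csv has a Hostname column) is to make the subsearch return the same field name the outer search filters on:

index=B
| stats count by Reporting_Host
| search NOT [| inputlookup inventory.csv | fields Hostname | rename Hostname AS Reporting_Host ]

The subsearch expands to NOT (Reporting_Host="host1" OR Reporting_Host="host2" OR ...), so renaming the lookup column to Reporting_Host is what makes the exclusion line up with the outer search.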
| spath path=c7n:MatchedFilters{} output=MatchedFilters | eval MatchedFilters=mvmap(MatchedFilters, replace(MatchedFilters, "^tag:", "")) | chart count over _time by MatchedFilters useother=f

Note: replace() strips only the literal "tag:" prefix. trim(MatchedFilters,"tag:") treats "tag:" as a character set and also eats trailing t/a/g characters, so "tag:rk_object" would come out as "rk_objec".
In Splunk, hot buckets are where incoming data is actively written and indexed. These buckets hold the most recent data and are immediately searchable. Once a hot bucket reaches its size or time limit, it transitions into a warm bucket. Warm buckets store data that is no longer being written to but remains searchable. ------ If you find this solution helpful, please consider accepting it and awarding karma points!
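You can see these bucket states for yourself with dbinspect, which reports one row per bucket (a quick sketch, using the _internal index as an example):

| dbinspect index=_internal
| stats count by state

The state field comes back as hot, warm, or cold, which maps directly to the lifecycle described above.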
This is the query I have figured out from the awesome Splunk community:

index=my-index "kubernetes.namespace_name"="namus" "cluster_id":"*stage*" "Env":"stg" "loggerName":"com.x.x.x.SomeClass" "My simple query for key=" "log.level"=INFO
| spath output=x log.message
| rex max_match=0 field=x "(?<key>\w+)=(?<value>\w+)"
| eval z=mvzip(key, value, "~")
| mvexpand z
| rex field=z "(?<key>[^~]+)~(?<value>.*)"
| table key value
| eval dummy=""
| xyseries dummy key value
| fields - dummy

Which results in this output. I am missing a lot of data. Can someone show how to list all the rows found? What is it that I am missing here?
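One likely culprit (a sketch, not tested against this data): xyseries keeps one output row per distinct value of its first argument, so a constant dummy field collapses every event into a single row and later key/value pairs overwrite earlier ones. Carrying a per-event row number through instead preserves one row per event:

index=my-index ... (same base search and rex as above)
| streamstats count AS row
| eval z=mvzip(key, value, "~")
| mvexpand z
| rex field=z "(?<key>[^~]+)~(?<value>.*)"
| xyseries row key value
| fields - row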
Hi @yuanliu  Unfortunately, none of the queries below are working for me. The first one is crashing Splunk, so I am unable to test it. With the second one, I don't get any results. It could be because the field "Reporting_Host" is present only in index B, and since we are excluding index B in the next step, the results are 0. However, I tried renaming the Hostname field in index A and running the query, but got no results. Can we test this scenario using a lookup table? That might improve the search performance. Can you give me something in this regard?
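If you want to try the lookup route, one way (a sketch; the Hostname and IP field names in index A are assumptions) is to materialize the inventory from index A first:

index=A | stats values(IP) AS IP by Hostname | outputlookup inventory.csv

Once the CSV exists, the exclusion against index B can use the inputlookup pattern from earlier in this thread, which avoids re-scanning index A on every run.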
This has been officially registered as a bug. No ETA on a fix.
Yes... I need to take the load values available to create a metric so I can compare week-over-week values on a timeline. I do not like the option that is available to do this in Dash Studio. I don't think it is particularly accurate, and I can't change the colors, etc. I also need to be able to compare the week-over-week values so I can create a metric expression to use in an alert, which I can't do in Dash Studio. Now that I'm thinking about it... I may be able to filter on the particular customer in a query if I have their id in a parameter I'm collecting from an HTTP data collector. I would prefer not to have to go down that road, but it may be the only option.
"c7n:MatchedFilters": [ "tag:ApplicationFailoverGroup", "tag:AppTier", "tag:Attributes", "tag:DBNodes", "tag:rk_aws_native_account_id", "tag:rk_cluster_id", "tag:rk_component", "tag:rk_instance_class... See more...
"c7n:MatchedFilters": [ "tag:ApplicationFailoverGroup", "tag:AppTier", "tag:Attributes", "tag:DBNodes", "tag:rk_aws_native_account_id", "tag:rk_cluster_id", "tag:rk_component", "tag:rk_instance_class", "tag:rk_job_id", "tag:rk_managed", "tag:rk_object", "tag:rk_restore_source_region", "tag:rk_restore_timestamp", "tag:rk_source_snapshot_native_id", "tag:rk_source_vm_native_id", "tag:rk_source_vm_native_name", "tag:rk_taskchain_id", "tag:rk_user", "tag:rk_version" ]
Hi, can you not just use the default widgets and select Service Endpoints, which has load, response times and errors, and add that to a dashboard? Is there a reason you want to use Analytics?
Hi @tlmayes, I have the same issue. Any solutions for this? Please advise.
Forgot to mention: would this be done via an update to the job status (while it's running)? If so, please share any example code. Thanks.
Hello Splunkers,

I started to use the Splunk universal forwarder in my job and I am kind of new to systems. My dashboard works well with the standard ALL option in the multiselect, but when it comes to selecting multiple indexes from the menu I've got a huge problem. My multiselect search is:

index="myindex" sourcetype="pinginfo" source="C:\\a\\b\\c\\d\\e\\f f\\g\\h\\ı-i-j\\porty*" | table source | dedup source

but when I pass this token to reports as:

$multi_token$ | eval ping_error=case( like(_raw, "%Request Timeout%"), "Request_Timeout", like(_raw, "%Destination Host Unreachable%"), "Destination_Host_Unreachable") | where isnotnull(ping_error) AND NOT like(_raw, "%x.y.z.net%") | stats count as total_errors by _time, source | timechart span=1h sum(total_errors) as total_errors by source

it creates a search string with only single backslashes instead of double backslashes:

source="C:\a\b\c\d\e\f f\e\g\ı-i-j\porty102" | eval ping_error=case( like(_raw, "%Request Timeout%"), "Request_Timeout", like(_raw, "%Destination Host Unreachable%"), "Destination_Host_Unreachable") | where isnotnull(ping_error) AND NOT like(_raw, "%x.y.z.net%") | stats count as total_errors by _time, source | timechart span=1h sum(total_errors) as total_errors by source

I've tried so many things but couldn't solve it.

Important note: in the multiselect dropdown menu, elements are shown with their whole source address, such as C:\a\b\c\d\e\f f\d\e\ı-i-j\porty102. I couldn't show this here either. I can't change anything about the Splunk universal forwarder settings or the source address because restrictions are so strict in the company.

Regards
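One workaround to try (a sketch, untested against this dashboard): escape the backslashes inside the populating search itself, so the token already carries doubled backslashes when it is substituted into the report query:

index="myindex" sourcetype="pinginfo" source="C:\\a\\b\\c\\d\\e\\f f\\g\\h\\ı-i-j\\porty*"
| dedup source
| eval source_escaped=replace(source, "\\\\", "\\\\\\\\")
| table source source_escaped

Then point the multiselect's fieldForValue at source_escaped and its fieldForLabel at source, so users still see the readable path while the token carries the escaped one.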
I'm trying to get my custom Python generating command to output warnings or alerts below the search bar in the UI. If I raise an exception, it displays there automatically, but like all exceptions it's messy. I'd like to be able to catch the exception and format it correctly, or better still just write out the warning message for it to be picked up by the GUI. It looks like I should create some custom messages.conf stanza and include that name and formatting in the message:

[PYTHONSCRIPT:RESULTS_S_S]
message = Expecting %s results, server provided %s

The current logging is going to search.log (CHUNKEDEXTPROCESSORVIASTDERRLOGGER) but not reaching info.csv (infoPath). Thanks in advance.
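If the command is built on the splunklib Python SDK, the search command base class already exposes message hooks that surface under the search bar; a minimal sketch (assuming the splunk-sdk package is bundled with the app and the command is registered in commands.conf; names and counts are hypothetical):

#!/usr/bin/env python
# Hypothetical generating command illustrating write_warning.
import sys
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration

@Configuration()
class MyGenCommand(GeneratingCommand):
    def generate(self):
        expected, provided = 10, 7  # hypothetical counts for illustration
        if provided < expected:
            # Surfaces a formatted warning in the UI instead of a raw traceback
            self.write_warning("Expecting {0} results, server provided {1}".format(expected, provided))
        for i in range(provided):
            yield {"_serial": i, "_raw": "event {0}".format(i)}

dispatch(MyGenCommand, sys.argv, sys.stdin, sys.stdout, __name__)

There are write_error and write_info counterparts on the same base class, so a caught exception can be reformatted and re-emitted at whatever severity fits.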
The overlay field has to be a field from the search, so you will have to combine the daily count and the moving average into a single data source.
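For example, both series can come out of one search and one data source (a sketch; the index name and the 7-day sma window are assumptions):

index=your_index
| timechart span=1d count AS daily_count
| trendline sma7(daily_count) AS moving_avg

With both fields in the primary data source, moving_avg can be named directly in overlayFields without any cross-data-source reference.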
Hello everyone,

I have created a query that lists sourcetypes:

index=_audit action=search info=granted source="*metrics.log" group="per_sourcetype_thruput"
| eval _raw=search
| eval _raw=mvindex(split(_raw,"|"),0)
| table _raw
| extract
| stats count by sourcetype
| eval hasBeenSearched=1
| append [| metadata index=* type="sourcetypes" | eval hasBeenSearched="0"]
| chart sum(kb) by series
| sort - sum(kb)
| search hasBeenSearched="0"
| search NOT [inputlookup sourcetypes_1.csv | fields sourcetype]

I would like to modify this query so that it also lists the ingestion volume of these sourcetypes. Kindly suggest.
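The per-sourcetype volume is easiest to pull from metrics.log directly (a sketch; in the per_sourcetype_thruput group, series holds the sourcetype name and kb the throughput):

index=_internal source=*metrics.log group=per_sourcetype_thruput
| stats sum(kb) AS total_kb by series
| rename series AS sourcetype
| sort - total_kb

Joining that on sourcetype with the unsearched-sourcetypes list above would give both columns in one table.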
Thanks @gcusello

Here are the high-level steps coming to my mind:
1. Take the backup of the Splunk mount point.
2. Stop the first physical node.
3. Format the existing OS and configure the new one.
4. Restore the backup taken.

Let me know if I am missing something.
Using Dashboard Studio, I have my data source for one panel, then a chained data source for another panel. The first panel is a bar chart of counts by day; the second is a moving average. I am trying to overlay the moving average on top of the bar chart. I have done this in Classic using overlays, but in Studio I don't know how to reference the chained data source's results in the first panel. For example, my bar chart visualization code looks like this. In overlay fields I tried to explicitly reference the data source name, but it doesn't seem to work. I know both queries/data sources are working, as my base search works and my chained search works when shown in separate panels.

{
  "type": "splunk.column",
  "dataSources": {
    "primary": "ds_C2wKdHsA"
  },
  "title": "Per Day Count",
  "options": {
    "y": "> primary | frameBySeriesNames('NULL','_span','_spandays')",
    "legendTruncation": "ellipsisOff",
    "legendDisplay": "off",
    "xAxisTitleVisibility": "hide",
    "xAxisLabelRotation": -45,
    "yAxisTitleVisibility": "hide",
    "overlayFields": "$chaineddatasource_ByDayMA:result.gpsreHaltedJobsMA$",
    "axisY2.enabled": true,
    "dataValuesDisplay": "all"
  },
  "showProgressBar": false,
  "showLastUpdated": false,
  "context": {}
}
Hi @JandrevdM,

Splunk has the join command, but I don't recommend it because it's very slow and requires many resources. If you have fewer than 50,000 results in the second search, you could use this solution, joining events with the stats command:

index=db_azure_activity sourcetype=azure:monitor:activity change_type="virtual machine"
| rename "identity.authorization.evidence.roleAssignmentScope" AS subscription
| dedup object
| where command!="MICROSOFT.COMPUTE/VIRTUALMACHINES/DELETE"
| table change_type object resource_group subscription command _time
| sort object asc
| append [ search index=* sourcetype=o365:management:activity
    | rename "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceName" as ResourceName "PropertyBag{}.AssessmentStatusPerInitiative{}.CloudProvider" as CloudProvider "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceType" as ResourceTypes "PropertyBag{}.AssessmentStatusPerInitiative{}.EventType" as EventType
    | where ResourceTypes="Microsoft.Compute/virtualMachines" OR ResourceTypes="microsoft.compute/virtualmachines"
    | eval object=mvdedup(split(ResourceName," ")), Provider=mvdedup(split(CloudProvider," ")), Type=mvdedup(split(ResourceTypes," "))
    | dedup object
    | where EventType!="Microsoft.Security/assessments/Delete"
    | table object, Provider, Type *
    | sort object asc ]
| stats values(*) AS * BY object

optionally limiting the displayed fields to match your requirements.

Ciao.
Giuseppe
Good day,

I have done a join on two indexes before to add more information to one event, for example getting the department for a user from network events. But now I want to combine two indexes to give me more data.

For example, index one will display:
host 1 10.0.0.2
host 2 10.0.0.3

And index two will display:
host 3 10.0.0.4
host 1 10.0.0.2

What I want is:
host 1 10.0.0.2
host 2 10.0.0.3
host 3 10.0.0.4

index=db_azure_activity sourcetype=azure:monitor:activity change_type="virtual machine"
| rename "identity.authorization.evidence.roleAssignmentScope" as subscription
| dedup object
| where command!="MICROSOFT.COMPUTE/VIRTUALMACHINES/DELETE"
| table change_type object resource_group subscription command _time
| sort object asc

index=* sourcetype=o365:management:activity
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceName" as ResourceName
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.CloudProvider" as CloudProvider
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceType" as ResourceTypes
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.EventType" as EventType
| where ResourceTypes="Microsoft.Compute/virtualMachines" OR ResourceTypes="microsoft.compute/virtualmachines"
| eval object=mvdedup(split(ResourceName," "))
| eval Provider=mvdedup(split(CloudProvider," "))
| eval Type=mvdedup(split(ResourceTypes," "))
| dedup object
| where EventType!="Microsoft.Security/assessments/Delete"
| table object, Provider, Type *
| sort object asc