All Posts

"c7n:MatchedFilters": [ "tag:ApplicationFailoverGroup", "tag:AppTier", "tag:Attributes", "tag:DBNodes", "tag:rk_aws_native_account_id", "tag:rk_cluster_id", "tag:rk_component", "tag:rk_instance_class... See more...
"c7n:MatchedFilters": [ "tag:ApplicationFailoverGroup", "tag:AppTier", "tag:Attributes", "tag:DBNodes", "tag:rk_aws_native_account_id", "tag:rk_cluster_id", "tag:rk_component", "tag:rk_instance_class", "tag:rk_job_id", "tag:rk_managed", "tag:rk_object", "tag:rk_restore_source_region", "tag:rk_restore_timestamp", "tag:rk_source_snapshot_native_id", "tag:rk_source_vm_native_id", "tag:rk_source_vm_native_name", "tag:rk_taskchain_id", "tag:rk_user", "tag:rk_version" ]
Hi, could you not just use the default widgets and select Service Endpoints, which has load, response times, and errors, and add that to a dashboard? Is there a reason you want to use Analytics?
Hi @tlmayes, I have the same issue. Any solutions for this? Please.
Forgot to mention: would this be done via an update to the job status (while it's running)? If so, please share any example code. Thanks.
Hello Splunkers, I started using the Splunk universal forwarder at my job and I am fairly new to systems. My dashboard works fine with the standard ALL option in the multiselect, but when I select multiple indexes from the menu I have a huge problem. My multiselect search is:

index="myindex" sourcetype="pinginfo" source="C:\\a\\b\\c\\d\\e\\f f\\g\\h\\ı-i-j\\porty*"
| table source
| dedup source

But when I pass this token to reports as:

$multi_token$
| eval ping_error=case(like(_raw, "%Request Timeout%"), "Request_Timeout", like(_raw, "%Destination Host Unreachable%"), "Destination_Host_Unreachable")
| where isnotnull(ping_error) AND NOT like(_raw, "%x.y.z.net%")
| stats count as total_errors by _time, source
| timechart span=1h sum(total_errors) as total_errors by source

it creates a search string with only single backslashes instead of double backslashes:

source="C:\a\b\c\d\e\f f\e\g\ı-i-j\porty102"
| eval ping_error=case(like(_raw, "%Request Timeout%"), "Request_Timeout", like(_raw, "%Destination Host Unreachable%"), "Destination_Host_Unreachable")
| where isnotnull(ping_error) AND NOT like(_raw, "%x.y.z.net%")
| stats count as total_errors by _time, source
| timechart span=1h sum(total_errors) as total_errors by source

I've tried many things but could not solve it. Important note: in the multiselect dropdown menu, elements are shown with their whole source address, such as C:\a\b\c\d\e\f f\d\e\ı-i-j\porty102, and I could not change how they are shown either. I can't change anything about the universal forwarder settings or the source address because restrictions in the company are very strict. Regards
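One likely culprit (an assumption, not confirmed by the post): the multiselect passes the raw source value into the token, so the single backslashes of the Windows path are no longer escaped once substituted into the new search string. A small Python sketch of the escaping the token value would need before substitution (the path below is a simplified, hypothetical example):

```python
# Hypothetical illustration: a Windows source path as Splunk stores it
# contains single backslashes; inside a quoted SPL string each backslash
# must appear doubled. One way is a simple replace on the token value.
raw_source = r"C:\a\b\c\d\e\f f\g\h\porty102"   # value as shown in the dropdown

escaped = raw_source.replace("\\", "\\\\")      # double every backslash

spl_clause = 'source="%s"' % escaped
print(spl_clause)
```

In Simple XML this kind of transformation is typically done on the Splunk side instead, e.g. with the multiselect's valuePrefix/valueSuffix settings or a change handler that rewrites the token; verify the exact mechanism against the Splunk dashboard documentation.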
I'm trying to get my custom Python generating command to output warnings or alerts below the search bar in the UI. If I raise an exception, it displays there automatically, but like all exceptions it's messy. I'd like to be able to catch the exception and format it correctly, or better still just write out the warning message for it to be picked up by the GUI. It looks like I should create a custom [messages.conf] stanza and include that name and formatting in the message:

[PYTHONSCRIPT:RESULTS_S_S]
message = Expecting %s results, server provided %s

The current logging is going to search.log (CHUNKEDEXTPROCESSORVIASTDERRLOGGER) but not reaching info.csv (infoPath). Thanks in advance.
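If the command is built on the Splunk Python SDK (splunklib.searchcommands), the SearchCommand base class also provides write_warning()/write_error() helpers that surface messages under the search bar, which may avoid messages.conf entirely. A self-contained sketch of that pattern — splunklib itself is stubbed here so the snippet runs standalone, and the exact signature and formatting behavior of write_warning should be verified against the SDK documentation:

```python
class GeneratingCommand:
    """Stub standing in for splunklib.searchcommands.GeneratingCommand.
    In a real command you would instead do:
        from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration
    """
    def __init__(self):
        self.messages = []

    def write_warning(self, text, *args):
        # In splunklib this surfaces the message under the search bar in Splunk Web
        self.messages.append(text % args if args else text)


class CountCheckCommand(GeneratingCommand):
    def generate(self, expected, got):
        if got != expected:
            # Same %-style template as the messages.conf stanza above
            self.write_warning("Expecting %s results, server provided %s",
                               expected, got)
        for i in range(got):
            yield {"_serial": i}


cmd = CountCheckCommand()
rows = list(cmd.generate(10, 7))
print(cmd.messages[0])  # Expecting 10 results, server provided 7
```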
The overlay field has to be a field from the search, so you will have to combine the daily count and the moving average into a single data source.
Hello everyone, I have created a query that lists sourcetypes:

index=_audit action=search info=granted source="*metrics.log" group="per_sourcetype_thruput"
| eval _raw=search
| eval _raw=mvindex(split(_raw,"|"),0)
| table _raw
| extract
| stats count by sourcetype
| eval hasBeenSearched=1
| append [| metadata index=* type="sourcetypes" | eval hasBeenSearched="0"]
| chart sum(kb) by series
| sort - sum(kb)
| search hasBeenSearched="0"
| search NOT [inputlookup sourcetypes_1.csv | fields sourcetype]

I would like to modify this query so that it also lists the ingestion volume of these sourcetypes. Kindly suggest.
Thanks @gcusello. Here are the high-level steps coming to my mind:
1. Take a backup of the Splunk mount point.
2. Stop the first physical node.
3. Format the existing OS and configure the new one.
4. Restore the backup taken.
Let me know if I am missing something.
Using Dashboard Studio, I have a data source for one panel and a chained data source for another panel. The first panel is a bar chart of counts by day; the second is a moving average. I am trying to overlay the moving average on top of the bar chart. I have done this in Classic using overlays, but in Studio I don't know how to reference the chained data source's results in the first panel. For example, my bar chart visualization code looks like this. In overlayFields I tried to explicitly reference the data source name, but it doesn't seem to work. I know both queries/data sources are working, as my base search works and my chained search works when shown in separate panels.

{
  "type": "splunk.column",
  "dataSources": { "primary": "ds_C2wKdHsA" },
  "title": "Per Day Count",
  "options": {
    "y": "> primary | frameBySeriesNames('NULL','_span','_spandays')",
    "legendTruncation": "ellipsisOff",
    "legendDisplay": "off",
    "xAxisTitleVisibility": "hide",
    "xAxisLabelRotation": -45,
    "yAxisTitleVisibility": "hide",
    "overlayFields": "$chaineddatasource_ByDayMA:result.gpsreHaltedJobsMA$",
    "axisY2.enabled": true,
    "dataValuesDisplay": "all"
  },
  "showProgressBar": false,
  "showLastUpdated": false,
  "context": {}
}
Hi @JandrevdM, Splunk has the join command, but I don't recommend it because it's very slow and requires many resources. If you have fewer than 50,000 results in the second search, you could use this solution, joining events with the stats command:

index=db_azure_activity sourcetype=azure:monitor:activity change_type="virtual machine"
| rename "identity.authorization.evidence.roleAssignmentScope" AS subscription
| dedup object
| where command!="MICROSOFT.COMPUTE/VIRTUALMACHINES/DELETE"
| table change_type object resource_group subscription command _time
| sort object asc
| append [ search index=* sourcetype=o365:management:activity
    | rename "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceName" as ResourceName "PropertyBag{}.AssessmentStatusPerInitiative{}.CloudProvider" as CloudProvider "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceType" as ResourceTypes "PropertyBag{}.AssessmentStatusPerInitiative{}.EventType" as EventType
    | where ResourceTypes="Microsoft.Compute/virtualMachines" OR ResourceTypes="microsoft.compute/virtualmachines"
    | eval object=mvdedup(split(ResourceName," ")), Provider=mvdedup(split(CloudProvider," ")), Type=mvdedup(split(ResourceTypes," "))
    | dedup object
    | where EventType!="Microsoft.Security/assessments/Delete"
    | table object, Provider, Type *
    | sort object asc ]
| stats values(*) AS * BY object

eventually limiting the displayed fields to your requirements. Ciao. Giuseppe
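As a side note, the effect of append followed by `stats values(*) AS * BY object` can be sketched in plain Python (hypothetical host rows, modeled on the example in the question, not real data):

```python
from collections import defaultdict

# Rows from two hypothetical "indexes", sharing the key field `object`
index_one = [{"object": "host 1", "ip": "10.0.0.2"},
             {"object": "host 2", "ip": "10.0.0.3"}]
index_two = [{"object": "host 3", "ip": "10.0.0.4"},
             {"object": "host 1", "ip": "10.0.0.2"}]

# `append` simply concatenates the two result sets; `stats values(*) BY object`
# then collects the distinct values of every other field for each object
merged = defaultdict(lambda: defaultdict(set))
for row in index_one + index_two:
    for field, value in row.items():
        if field != "object":
            merged[row["object"]][field].add(value)

print(sorted(merged))  # ['host 1', 'host 2', 'host 3']
```

Rows present in only one index still come through (host 2, host 3), while rows present in both collapse to one (host 1), which matches the desired output in the question.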
Good day, I have done a join on two indexes before to add more information to one event, for example getting the department for a user from network events. But now I want to combine two indexes to give me more data. For example, index one will display:
host 1 10.0.0.2
host 2 10.0.0.3
And index two will display:
host 3 10.0.0.4
host 1 10.0.0.2
What I want is:
host 1 10.0.0.2
host 2 10.0.0.3
host 3 10.0.0.4

index=db_azure_activity sourcetype=azure:monitor:activity change_type="virtual machine"
| rename "identity.authorization.evidence.roleAssignmentScope" as subscription
| dedup object
| where command!="MICROSOFT.COMPUTE/VIRTUALMACHINES/DELETE"
| table change_type object resource_group subscription command _time
| sort object asc

index=* sourcetype=o365:management:activity
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceName" as ResourceName
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.CloudProvider" as CloudProvider
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceType" as ResourceTypes
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.EventType" as EventType
| where ResourceTypes="Microsoft.Compute/virtualMachines" OR ResourceTypes="microsoft.compute/virtualmachines"
| eval object=mvdedup(split(ResourceName," "))
| eval Provider=mvdedup(split(CloudProvider," "))
| eval Type=mvdedup(split(ResourceTypes," "))
| dedup object
| where EventType!="Microsoft.Security/assessments/Delete"
| table object, Provider, Type *
| sort object asc
Hi @JandrevdM , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @sbhatnagar88 , I don't see any issue in this activity, plan it and migrate one server at a time. Ciao. Giuseppe
Hi folks, we currently have 4 physical indexers running on CentOS, but since CentOS is EOL, we plan to migrate the OS from CentOS to Red Hat on the same physical nodes. The cluster master is a VM and is already running on Red Hat, so we will not be touching the CM. What should the approach be here, and how should we plan this activity? Any high-level steps would be highly appreciated.
Hot buckets are still being written to; warm buckets are not. Both are usually on fast (expensive) storage.
Thanks! I initially got it right and then tried to think too deep into it. I forgot that if you dedup, Splunk will keep the latest event.
Answered my own question - the script below worked for me:

require([
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function(mvc) {
    console.log('JavaScript loaded'); // Debug log to confirm script is running
    var defaultTokenModel = mvc.Components.getInstance('default');
    var submittedTokens = mvc.Components.getInstance('submitted');

    document.getElementById('submit_button').addEventListener('click', function() {
        var comment = document.getElementById('html_ta_user_comment').value;
        console.log('Submit button clicked'); // Debug log to confirm button click
        console.log('Comment:', comment); // Debug log to show the comment value
        defaultTokenModel.set('tokComment', comment);
        console.log('Token set:', defaultTokenModel.get('tokComment')); // Debug log to confirm token is set
        // Trigger the search by updating the submitted tokens
        submittedTokens.set(defaultTokenModel.toJSON());
    });
});

By adding the line submittedTokens.set(defaultTokenModel.toJSON());, we ensured that the search is refreshed whenever the token value changes.
Thanks @isoutamo. This is very insightful.