All Topics

Hi guys, I'm trying to figure out what the prerequisites are to validate a Splunk installation, such as the running service name, the application name in Control Panel, and the registry path.
Hi, how can we normalize MAC addresses (such as XX:XX:XX:XX:XX:XX or XX-XX-XX-XX-XX-XX) in our environment before implementing the asset and identity framework in Splunk ES? We are collecting data from workspace.
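A common approach is to normalize the value at search time before it reaches the asset lookup: lowercase it and convert dashes to colons. A minimal sketch, assuming the raw field is called mac (adjust the field name to your data):

```spl
... | eval mac=lower(replace(mac, "-", ":"))
```

The same expression can also be defined once as a calculated field (EVAL-mac in props.conf) so that every search, including the asset and identity lookups, sees the normalized form.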
Hi there, I am trying to make a statistics graph in my dashboard using the search below.

| mstats rate(vault.runtime.total_gc_pause_ns.value) as gc_pause WHERE `vault_telemetry` AND cluster=* AND (host=*) BY host span=5m | timechart max(gc_pause) AS iowait bins=1000 BY host | eval warning=3.3e7, critical=8.3e7

Note that this search comes from the pre-defined dashboard template, but it is not working as-is in my environment. In my Splunk, when I run mpreview on my index `vault_telemetry` I get results like:

metric_name:vault.hostname1.runtime.total_gc_pause_ns
metric_name:vault.hostname2.runtime.total_gc_pause_ns
metric_name:vault.hostname3.runtime.total_gc_pause_ns
metric_name:vault.hostname4.runtime.total_gc_pause_ns

If I modify the pre-defined search from the template as below, I can get a result, but only for one hostname at a time:

| mstats rate(vault.hostname1.runtime.total_gc_pause_ns) as gc_pause WHERE `vault_telemetry` span=5m | timechart max(gc_pause) AS iowait bins=1000 | eval warning=3.3e7, critical=8.3e7

I would like all the hostnames shown on a single panel. Can someone please assist me with the correct search I need to use?
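Since the hostname is embedded in the metric name, one possible approach (an untested sketch against your metric naming) is to wildcard the host segment and split by metric_name instead of host:

```spl
| mstats rate(vault.*.runtime.total_gc_pause_ns) as gc_pause WHERE `vault_telemetry` span=5m BY metric_name
| timechart max(gc_pause) AS iowait bins=1000 BY metric_name
| eval warning=3.3e7, critical=8.3e7
```

If your Splunk version rejects the AS alias together with a wildcarded metric name, drop the alias and reference the auto-generated field instead.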
Hi team, how can we integrate Proficio with Splunk?
Hello Splunk community, I am new as a Splunk administrator here at the company, and a few days ago I received a requirement to upgrade the Splunk version. We are on Splunk 8.2.6 and the minimum version required is 8.2.12. I'm not sure how big the risk is in the upgrade process, as we need to be sure the information in the indexers is going to be safe and Splunk must remain operational. I have read some of the documentation on upgrading to version 9.0.6, but as I said, I am not sure of the best option with the minimum risk. Do you have any advice? Thank you!
Hello Splunkers, can someone help me with a query to detect multiple HTTP errors from a single IP, basically when the status code is in the 400s/500s? Thank you, regards, Moh
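One way to sketch this (the index, sourcetype, field names, and the threshold of 10 are all assumptions; adjust them to your data):

```spl
index=web sourcetype=access_combined status>=400 status<=599
| stats count AS error_count values(status) AS status_codes BY src_ip
| where error_count > 10
| sort - error_count
```

Run it over whatever time window you care about, or add `bin _time span=5m` and group `BY _time, src_ip` to alert on bursts rather than totals.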
I am trying to parse some data for API latency. I have a field "elapsedTime" which reports that value. However, if a request takes longer than 999ms, it switches to reporting in seconds. So the query below could return 999ms or 1.001s. What eval statement do I need here to parse the value of elapsedTime and, if it contains "s" but not "ms", multiply by 1000 to get a value in ms?

| NEED SOME EVAL HERE I GUESS | stats min(elapsedTime) as Fastest max(elapsedTime) as Slowest avg(elapsedTime) as "Average"
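One sketch of such an eval (assuming elapsedTime is a string like "999ms" or "1.001s"; case evaluates its branches in order, so the "ms" test must come first, otherwise "%s" would also match values ending in "ms"):

```spl
| eval elapsed_ms=case(
    like(elapsedTime, "%ms"), tonumber(replace(elapsedTime, "ms$", "")),
    like(elapsedTime, "%s"), tonumber(replace(elapsedTime, "s$", "")) * 1000
  )
| stats min(elapsed_ms) as Fastest max(elapsed_ms) as Slowest avg(elapsed_ms) as "Average"
```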
I am looking to audit any user that uploads an app to Splunk through the user interface or the REST API. After doing some investigation, I have found that /services/apps/local is the REST API endpoint that can be used to post an application. I was wondering whether Splunk internally posts to that endpoint when you use the GUI, so that by auditing that log you can cover both use cases. I have crafted the search below to isolate these events, and confirmation that this works would be awesome!

index=_internal sourcetype=splunkd_access /services/apps/local method=POST

Appreciate all assistance.
This is a bit of a long shot, but I was curious to get the community's input. Today, I realized that both Slack and PasteBin use "codemirror" to handle their web code editor / syntax highlighting. With PasteBin, I had to examine the page source to confirm. With Slack, you can confirm it here: https://slack.com/libs/webapp

I figured I would submit a feature request to codemirror to see if they could add a "language mode" for Splunk. However, my issue was immediately closed with the response that codemirror does not implement new language modes, and that it would be better implemented as a separate package. So I guess someone will have to create and maintain a codemirror language mode for others to use if they want SPL support. Unfortunately, I do not have the experience to do this. But looking around, it appears GraphQL built their own codemirror language package...so I was thinking...even if it's a long shot, maybe I can send the idea to Splunk and see what happens.

Where would be the appropriate place to send this suggestion to Splunk to see if that's something they'd be interested in implementing? I tried checking the ideas submission, but there are no categories where this idea would fit. I think it would be awesome if one day we could have Splunk syntax highlighting support in Slack (and also PasteBin, but there are a lot fewer people using that lol).
Hello, I have the three search queries below, and I want to combine the three metric-name sums into one total count. Can someone assist with how I can write my query?

First Query:
| mstats sum(vault.token.creation.nonprod) as count where index=vault_metrics span=1h | timechart sum(count) as count span=1h | fillnull value=0 | eventstats perc90(count) perc50(count)

Second Query:
| mstats sum(vault.token.creation.dev) as count where index=vault_metrics span=1h | timechart sum(count) as count span=1h | fillnull value=0 | eventstats perc90(count) perc50(count)

Third Query:
| mstats sum(vault.token.creation.nonprod_preprod) as count where index=vault_metrics span=1h | timechart sum(count) as count span=1h | fillnull value=0 | eventstats perc90(count) perc50(count)
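Since all three metrics live in the same index with the same span, one possible way to combine them is a single mstats with three aggregations, summed per bucket (a sketch; verify the metric names against your index):

```spl
| mstats sum(vault.token.creation.nonprod) AS nonprod sum(vault.token.creation.dev) AS dev sum(vault.token.creation.nonprod_preprod) AS preprod WHERE index=vault_metrics span=1h
| eval count=coalesce(nonprod, 0) + coalesce(dev, 0) + coalesce(preprod, 0)
| timechart sum(count) as count span=1h
| fillnull value=0
| eventstats perc90(count) perc50(count)
```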
Here is my KV store lookup:

name | rating | comment | experience | subject
A | 3 | good | 4 | math
B | 4 | very good | 7 | science

Now I want to append a new row like this, with a different rating:

name | rating | comment | experience | subject
A | 3 | good | 4 | math
B | 4 | very good | 7 | science
A | 5 | Excellent | 4 | math

I am trying to use:

| inputlookup table_a | search name="A" | eval rating=5, comment="Excellent", key=_key | outputlookup append=true key_field=key table_a

But this is not working. Please, someone help me with this. Thanks
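The likely issue is that key=_key carries the existing record's key into outputlookup, so append=true updates that record in place instead of adding a new row. Dropping _key lets the KV store generate a fresh key (a sketch, assuming table_a is a KV store lookup definition):

```spl
| inputlookup table_a
| search name="A"
| eval rating=5, comment="Excellent"
| fields - _key
| outputlookup append=true table_a
```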
Hi, is there any way to get the p95 of URI1, URI2, URI3 only if the p95 of URI4 is greater than 2 sec? I tried the query below, but it gives the p95 of only those URIs whose p95 > 2. I'm expecting the p95 of all of URI1, URI2, URI3 if the condition is satisfied.

index=myindex URI IN (URI1, URI2, URI3, URI4) | stats perc95(responsetime) as p95 by URI | where p95>2
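One way to keep all the rows while testing only URI4's value is to copy URI4's p95 onto every row with eventstats, then filter on that copy (a sketch against the field names in your search):

```spl
index=myindex URI IN (URI1, URI2, URI3, URI4)
| stats perc95(responsetime) as p95 by URI
| eventstats max(eval(if(URI="URI4", p95, null()))) as uri4_p95
| where uri4_p95 > 2 AND URI!="URI4"
```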
Hello, how do I outputlookup a CSV with permissions?

***Note that I am not a Splunk admin - I only have access to the Splunk GUI***

For example:

| outputlookup test.csv

It will create test.csv in the following directory with no owner and sharing: Global.

/opt/splunk/etc/apps/testapp/lookups/test.csv
  Owner: No owner
  App: testapp
  Sharing: Global
  Status: Enabled

I am able to delete it, but I cannot modify the permissions. How do I outputlookup a CSV set to sharing: App, with myself as the owner? Please help. Thank you so much.
I need to find abnormalities in my data. The data I have is individual views for certain movie titles. I need to find content that was abnormally popular over some small time interval, say 1 hour, and check a few weeks' worth of data. One option is to run a query manually for each hour:

``` Run this over a 60m time window ``` index=mydata | top limit=100 movieId

Obviously I don't want to run this query 24 * 7 = 168 times for one week's worth of data. How can I bin the data into time buckets and get a percentage ratio by movieId? This is what I came up with:

``` Run this over 1+ week ``` index=mydata | bin span=60m _time | top limit=100 movieId, _time

This does not help me, because the output of `top` shows a percentage based on the entire input set of data. I need a "local" percentage, i.e. a percentage based only on the slice of data in that bin. I'm wondering if eventstats or streamstats could be useful here, but I was not able to come up with a query using those commands.
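eventstats is indeed one way to get the per-bucket total: count per (hour, movieId), sum those counts within each hour, and divide (a sketch over your index):

```spl
index=mydata
| bin span=60m _time
| stats count by _time, movieId
| eventstats sum(count) as hourly_total by _time
| eval local_pct=round(100 * count / hourly_total, 2)
| sort _time, - local_pct
```

From there you can `where local_pct > N`, or compare each bucket's share against the title's long-run average to flag the abnormal hours.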
Bi-directional integration of Cisco AppDynamics and Cisco ThousandEyes gives you full visibility into every facet of your customers' digital experience. What if you could see everything that impacts your digital supply chain, from the code to the infrastructure to the network and everything in between? With Cisco AppDynamics plus ThousandEyes, you can. Learn more about it here.

Additional Resources
Read about the ThousandEyes integration with AppDynamics in Documentation.

About Courtney Dragoon
Courtney Dragoon is a Product Marketing Manager at Cisco AppDynamics, with over a decade of experience in high technology marketing and sales at companies like Dell EMC, GoTo and CoderPad. She is a customer-centric tech marketing leader with expertise in product marketing, strategic messaging and competitive differentiation. With a proactive attitude and a passion for driving innovation, she has successfully led cross-functional teams and spearheaded complex product launches. Courtney's passion for strong partnerships and impactful content creation has contributed to her success in the industry.
Hi, if I select "MM", how do I get the entities associated (i.e. material, supplied material) with that particular domain? My second dropdown has static values which then fetch the results for 4 different queries in parallel when I select a data entity. Then, how do I provide a multiselect option for (material and supplied material)?
Hello Splunk community, I'm in the process of installing Splunk for the first time on a Windows server. I've followed the official installation guide, but I've encountered an issue during the installation process. After running the installer, I received an error message that says 'Error 123: The filename, directory name, or volume label syntax is incorrect.' I've double-checked the installation path and made sure there are no special characters, but I still can't seem to get past this error. Has anyone else experienced this issue during installation? What steps can I take to resolve it and successfully install Splunk on my Windows server? Any help would be greatly appreciated. Thank you!
For anyone using Hurricane Labs "Broken hosts" app (https://splunkbase.splunk.com/app/3247), note that the latest version, 4.2.2, appears to have a very minor but breaking bug. The file /default/savedsearches.conf has a stanza for the "Broken Hosts Alert - by contact" alert. Depending how you use the app, that potentially drives your entire alerting mechanism. Two lines in that file (121 & 130) wrap a built-in search macro in double quotes where they should not exist:

| fillnull value="`default_expected_time`" lateSecs

should be:

| fillnull value=`default_expected_time` lateSecs

The result of this is to assign the string value "`default_expected_time`" to the lateSecs variable, rather than expanding to whatever default integer you configured in the macro. Removing those double quotes from both lines seems to fix the issue. I've also raised an issue on the Hurricane Labs github page below...though activity there is pretty stale and I'm not sure if anyone is looking there...

https://github.com/HurricaneLabs/brokenhosts/issues/3
Hi all, is there any way to enable and disable Splunk alerts automatically based on the log source? e.g. we have Site 1 and Site 2 in an active-passive setup.

Case 1: Site 1 is active and Site 2 is passive; all Site 1 alerts should get enabled automatically. We can search for the Site 1 host as the condition to enable alerts.

Case 2: Site 2 is active and Site 1 is passive; all Site 2 alerts should get enabled automatically. We can search for the Site 2 host as the condition to enable alerts.
Splunk queries are not returning anything in the table. I see events matching these queries, but nothing under the 'Statistics' section.

1. index=address-validation RESP_MARKER | rex field=log "\"operationPath\"\:\"(?<path>\w+).*\"operationType\"\:\"(?<type>\w+).*\"region\"\:\"(?<reg>\w+).*" | table path, type, reg

2. index=club-finder RESP_MARKER | rex field=log "\"operationPath\"\:\"\/(?<path>\w+).*\"operationType\"\:\"(?<type>\w+).*\"region\"\:\"(?<reg>\w+).*\"totalTime\"\:(?<timeTaken>\w+)" | table type, path, timeTaken, reg
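One thing to check: in the first search, (?<path>\w+) follows the opening quote directly, so if operationPath values begin with a slash (as the \/ in the second search suggests), the regex never matches and rex extracts nothing, which would leave the table empty. Separately, if the log field contains valid JSON, spath is often more robust than a hand-written regex (a sketch; the path names are taken from your regexes):

```spl
index=club-finder RESP_MARKER
| spath input=log path=operationPath output=path
| spath input=log path=operationType output=type
| spath input=log path=region output=reg
| spath input=log path=totalTime output=timeTaken
| table type, path, timeTaken, reg
```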