All Topics


Hi, I have a lookup that lists all Red Hat Linux versions; for example, the lookup contains Red Hat 7, Red Hat 8, and so on. I need to correlate OS logs with the lookup, but the OS values in my logs are not standardized, e.g. Red Hat Linux Enterprise 7.1, Red Hat Linux Enterprise Server 8.6, and so on. How do I normalize the OS field to match the standardized values in the lookup using regex? Please assist. Thank you
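One approach (a sketch; the index, the OS field name, and the lookup name rhel_versions are assumptions, not from the original post) is to extract only the product name and major version with rex, then rebuild a standardized value that matches the lookup:

```
index=os_logs
| rex field=OS "(?i)^(?<os_name>Red Hat)\D+(?<os_major>\d+)"
| eval OS_std=os_name." ".os_major
| lookup rhel_versions OS_std
```

This turns both "Red Hat Linux Enterprise 7.1" and "Red Hat Linux Enterprise Server 8.6" into "Red Hat 7" and "Red Hat 8", so they match the lookup's standardized entries.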
I have some logs coming into Splunk and they are parsing correctly without any issues (index=xxx sourcetype=splunk-logs). But the logs' time zone has now changed, so I have to update the time zone in props.conf. Where can I find the existing sourcetype=splunk-logs definition in Splunk?
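You can locate the existing definition by running `splunk btool props list splunk-logs --debug` on the parsing tier (indexers or heavy forwarder), which prints the stanza together with the file it comes from. The override itself is a small props.conf stanza (a sketch; the time zone shown is a placeholder for the logs' actual new zone):

```
# props.conf in an app's local/ directory on the parsing tier
[splunk-logs]
TZ = America/New_York
```

Note that TZ applies at index time, so it only affects events indexed after the change takes effect on the parsing tier.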
My inputs.conf on the Raspberry Pi looks like this:

[monitor:///var/log/pihole.log]
disabled = 0
sourcetype = pihole
index = main

[monitor:///var/log/pihole-FTL.log]
disabled = 0
sourcetype = pihole:ftl
index = main

Both log files exist in /var/log, but only one sourcetype gets sent to my indexer, and that's "pihole:ftl". Any assistance would be greatly appreciated.
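As a first check (a sketch; substitute your Pi's actual hostname), you can ask the forwarder's internal logs whether the monitor input ever picked the file up, which usually distinguishes a permissions problem from a stanza problem:

```
index=_internal host=raspberrypi source=*splunkd.log* "pihole.log" (component=TailReader OR component=WatchedFile OR log_level=ERROR OR log_level=WARN)
```

A common cause on a Pi is that /var/log/pihole.log is not readable by the user the forwarder runs as, in which case you will see permission errors here.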
I can't understand this; everything else works great. I receive all the information I enabled, and I have installed these apps both on the forwarders and the search heads. All that is missing is the "savedsearches.conf". I would appreciate suggestions, because obtaining these searches is very important for me at the moment.
I have events with the following keys: key1, key2 & key3. I would like to get the change events, i.e. events whose key1, key2 & key3 value combinations do not appear in the previous day's events. What should the query look like?
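One way to sketch this (the index name and time bounds are assumptions) is to search both days at once, tag each event's day, and keep only key combinations seen today but not yesterday:

```
index=my_index earliest=-1d@d latest=now
| eval day=if(_time >= relative_time(now(), "@d"), "today", "yesterday")
| stats values(day) as days by key1 key2 key3
| where mvcount(days)=1 AND days="today"
```

Combinations present on both days get days={yesterday, today} and are filtered out, leaving only the new (changed) combinations.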
I have a dropdown with two values, PROD and TEST. Based on my selection, the panels in the dashboard have to use a different index in their searches. How can I do this? Example of two searches (which also include other tokens; these can be ignored, and both searches work if I directly put in the right index):

1/ index=<IF PROD then AAA_prod_index else AAA_test_index> sourcetype IN (abc:edge:api, abc:edge:api) proxy!="ow*" $client_token$ $target_token$ | rex mode=sed field=proxy "s#^(.*?)_(.*)$#*_\2#" | stats count by proxy

2/ index=<IF PROD then BBB_prod_index else BBB_test_index> sourcetype=accesslog tenant=$tenant_token$ | stats count by HTTPStatusCode
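In Simple XML this is usually done by having the dropdown's change block set additional tokens, which the searches then reference (a sketch; the token and input names are assumptions):

```
<input type="dropdown" token="env">
  <label>Environment</label>
  <choice value="prod">PROD</choice>
  <choice value="test">TEST</choice>
  <change>
    <condition value="prod">
      <set token="aaa_index">AAA_prod_index</set>
      <set token="bbb_index">BBB_prod_index</set>
    </condition>
    <condition value="test">
      <set token="aaa_index">AAA_test_index</set>
      <set token="bbb_index">BBB_test_index</set>
    </condition>
  </change>
</input>
```

The panel searches then start with index=$aaa_index$ and index=$bbb_index$ respectively.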
I recently upgraded, or rather installed, a Splunk UF version 9.1.1 which communicates back to Splunk Cloud, but I seem to get an "Unsupported" error on the console. Is using version 9.1.1 of a forwarder not supported with the Splunk Cloud version below?

Version: 9.0.2209.3
Build: ec7eaea0bba6
Experience: Victoria
Hi Friends, I am trying to create a bar chart with a trend line showing the number of tickets received each month. I need to show the data label for only one month in the chart. Currently, it shows the data label for all months, but I need to show it for the first month alone. Please let me know how we can achieve this. Thanks.
All the timestamps in the JSON we receive are UTC, but the TA ignores the time zone in the ISO 8601 string, so it defaults to local time. Thus, all our events are timestamped several hours in the future. I noticed that the timestamps Google provides vary from millisecond to nanosecond precision, but trailing zeros are truncated before the "Z" is tacked on. This makes it difficult to specify a time format with a trailing time zone that will work for every event. Instead, shouldn't all the source types have TZ = UTC in props? Am I the only one with this problem?
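A props.conf override along these lines usually works around it (a sketch; the sourcetype name is hypothetical, so use the TA's actual sourcetypes): %N tolerates varying subsecond precision, and TZ pins the zone when the format string doesn't consume one.

```
# props.conf, local override for the TA's sourcetype (name is hypothetical)
[google:gcp:pubsub:message]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%N
TZ = UTC
```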
Good Morning! I rarely get to dabble in SPL, and as such, some (probably simple) things stump me. That is what brought me here today. I have a scenario in which I need to pull SYSLOG events from a series of machines that all report the same field names. One of those machines is the authoritative source of the values, which all of the other systems should match. As an example, I have 3 machines, M1, M2, and M3, and each machine reports three field/value pairs: sync-timestamp, version-number, machine-name. I need to compare the sync-timestamp of M1 with the sync-timestamp of the other two machines. My idea is to assign the "sync-timestamp value WHERE machine-name=M1" to a variable against which to compare the other two machines' values. I intend to use this report to ultimately create an alert, so we know if machines are not syncing properly. I just cannot figure out the syntax to make this happen. Can anyone provide some guidance on this? Thank you in advance!
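SPL has no variables as such, but eventstats can spread M1's value onto every event, which gives the same effect (a sketch; the index and sourcetype are assumptions, and hyphenated field names need single quotes inside eval expressions):

```
index=syslog sourcetype=machine_sync
| eventstats latest(eval(if('machine-name'=="M1", 'sync-timestamp', null()))) as m1_sync
| where 'machine-name'!="M1" AND 'sync-timestamp'!=m1_sync
| table machine-name sync-timestamp m1_sync
```

Any rows returned are machines whose sync-timestamp disagrees with M1's, which is exactly what the alert should trigger on.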
I have the following script, but it keeps erroring out.

import splunklib.client as client

def connect_to_splunk(username, password, host='xxxxxxxx.splunkcloud.com', port='8089', owner='admin', app='search', sharing='user'):
    # The original def line was missing the closing "):", and host should be a
    # bare hostname -- client.connect() takes the scheme separately.
    try:
        service = client.connect(username=username, password=password, host=host,
                                 port=port, owner=owner, app=app, sharing=sharing)
        if service:  # the original tested an undefined name, splunk_service
            print("Splunk login successful")
            print("......................")
        return service
    except Exception as e:
        print(e)

def main():
    try:
        splunk_service = connect_to_splunk(username='xxxxxx', password='xxxxxxx')
    except Exception as e:
        print(e)

if __name__ == '__main__':
    main()

There is no error from the debugger (using Visual Studio). Would appreciate any assistance.
Greetings. I'm trying to count all calls in this:

index="my_data" resourceId="sip*" "CONNECTED"

where they are not in this:

index="my_data" resourceId="sip*" "ENDED"

This works when the latter is <10k (subsearch limit):

index="my_data" resourceId="sip*" "CONNECTED" NOT [ search index="my_data" resourceId="sip*" "ENDED" | table guid ]

And I can use a join for more than 10k, because the TOTAL is not 10k (join limits):

index="my_data" resourceId="sip*" "CONNECTED" | table guid meta | join type=left guid [ search index="my_data" resourceId="sip*" "ENDED" | table guid timestamp ] | search NOT timestamp="*"

But neither 'feels' great. I'm making my way through the PDF found here but haven't figured out 'the best' way to do this (if such a thing exists): https://community.splunk.com/t5/Splunk-Search/how-to-exclude-the-subsearch-result-from-main-search/m-p/572567 While there are several questions related to 'excluding subsearch' results, I have not found many that help with this 10k issue (subsearch results of more than 10k, where a join works as long as my total result set is less than 10k). PLUS - joins are kinda sucky, amirite? I mean, that's like the first thing that Nick Mealy says in that pdf. So just looking for more options to try and learn! Thank you!
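The usual way around both limits is to drop the subsearch entirely, pull both states in one pass, and filter with stats (a sketch built from the fields in your queries):

```
index="my_data" resourceId="sip*" ("CONNECTED" OR "ENDED")
| eval state=case(searchmatch("CONNECTED"), "connected", searchmatch("ENDED"), "ended")
| stats values(state) as states by guid
| search states="connected" NOT states="ended"
| stats count
```

Because there is no subsearch or join, the 10k result caps don't apply; stats handles millions of guids without truncation.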
DevOps teams can use Experience Journey Maps and Dashboards to diagnose and solve issues more quickly, while Business teams use them to gauge event costs and business impact.

CONTENTS: Introduction | Video | About the presenter | Resources

Video length: 2 min 29 seconds

Ensure each user has the best mobile experience possible by understanding mobile user journeys. Bring teams together to understand how a user interacts with the mobile application in order to streamline operations and improve their experience.

By leveraging data collectors, teams can gain an even deeper understanding of specific events, for example the items in an abandoned shopping cart along with their associated revenue. This information helps build a conversion chart, giving the business an indication of how severe a problem may be, to then help prioritize remediation efforts.

About the presenter

Tori Beaird (Forbess) joined AppDynamics as a Sales Engineer in 2020. With an Industrial Distribution Engineering degree and a decade of musical theatre training, sales engineering offered the best of both worlds. Although she is a Texas native, she helps customers up and down the West Coast improve their application monitoring practices.

Within AppDynamics, Tori is a part of the Cisco Cloud Observability Champions team, enabling peers and customers alike on AppDynamics' latest monitoring tool. With a passion for teaching others, Tori continues to develop and present internal training sessions to the broader Cisco organization.

When Tori isn't at work, you will find her flying Cessna 182s, volunteering with her church, and spending time with her beloved husband and friends!

Additional Resources

Learn more about Mobile Real User Monitoring and Business iQ and Analytics in the documentation.
Windows domain controller server not reporting Windows security events in Splunk Cloud. We have a Windows Server acting as a domain controller; the Splunk forwarder is installed on this server and forwards to our local on-premise heavy forwarder, which then uploads to Splunk Cloud. The domain controller in question is displaying Windows event logs for Application and System but not for Security, so it is partially working, but somehow the security events are not making it to the cloud. However, it was working 100% fine before and later stopped. Around the time it stopped working, we added it to msad (for domain-controller-specific inputs) but did not make any other changes.
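Two things worth checking (a sketch; the index name is an assumption): that the Security stanza is still enabled after the msad change, and that the forwarder's service account can read the Security log (it must run as Local System or be in the Event Log Readers group; Security is the one channel ordinary accounts cannot read).

```
# inputs.conf (e.g. Splunk_TA_windows/local/) on the domain controller
[WinEventLog://Security]
disabled = 0
index = wineventlog
```

Running `splunk btool inputs list WinEventLog://Security --debug` on the DC shows which stanza actually wins after the msad change.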
We have set up cluster monitoring for a K8s cluster to monitor when pods get killed, failed, etc. We have hundreds of pods and namespaces in the cluster, and I would like the alert summary to contain the namespace and pod name; otherwise I don't know whether it's something I can ignore. For example, if someone is testing a new app in a dev namespace, the pod alerts might go off a lot, and there is no way to know whether it is a production namespace/pod without going into the app. When I wake up in the morning to 100 email alerts for failing pods, I don't know if prod is falling over or if someone set up a new namespace to test. It doesn't seem to be possible to get the pod name, even though it's available in the dashboard view of the event. All I want is for the email/HTML template to include the namespace and pod name, but that doesn't seem to be possible today.
I have an alert that fires and, while generating the alert, uses appendpipe to collect fields and generate an event in another index for collection by a third-party tool. Is there a way to add the View Results link to the generated event, so that our third-party tool can map it and link analysts back to the original alert?
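As far as I know, the native $results_link$ token only exists in alert-action contexts (email, webhook, etc.), not inside the SPL itself, so from within appendpipe you would have to build a link yourself. One hedged workaround is to embed a URL that re-runs a search for the alert's own key fields (the hostname, target index, and alert_id field here are all hypothetical):

```
| appendpipe
    [ eval results_link="https://splunk.example.com/app/search/search?q=search%20index%3Dalerts%20alert_id%3D" . alert_id
      | collect index=third_party_feed ]
```

Alternatively, consider replacing the collect step with a "Log event" alert action, where alert-action tokens such as $results_link$ are available.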
I understand that there are 2 approved architectures for multisite search head clustering: one where each site has its own independent search head cluster with search affinity to the indexer clusters, and a second option where a single search head cluster is stretched across the two sites. For the first option, where the search head clusters are independent per site, I have read that search head clusters are not site-aware. Does this mean that things saved through the search head cluster on site 1 would not replicate to site 2? For example, if I were to create a new dashboard at site 1 in the web UI through the search head cluster, it would not replicate to site 2?
Hello fellow Splunkthiasts! I need some insights to understand how comparison functions in mstats could be used. Consider the following query:   | mstats latest(cpu_metric.*) as * WHERE index="osnix_metrics" sourcetype=cpu_metric CPU=all BY host | where pctUser > 50   As expected, it returns a list of hosts having latest CPU usage value higher than 50%. However, according to mstats command reference, I can have comparison expression within WHERE clause and I'd expect it would be more efficient to rewrite the above query like this:   | mstats latest(cpu_metric.*) as * WHERE index="osnix_metrics" sourcetype=cpu_metric CPU=all pctUser > 50 BY host   Unfortunately, this doesn't return any results. I tried to refer to metric before aggregation with no luck:   | mstats latest(cpu_metric.*) as * WHERE index="osnix_metrics" sourcetype=cpu_metric CPU=all cpu_metric.pctUser > 50 BY host   What am I missing here?
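As far as I can tell, the WHERE clause in mstats filters on dimensions and indexed fields only; it is not evaluated against measurement values, so a numeric comparison on a metric there matches nothing. Filtering after aggregation, as in the first query, is the supported pattern (a sketch narrowed to the one metric you actually compare):

```
| mstats latest(cpu_metric.pctUser) as pctUser WHERE index="osnix_metrics" sourcetype=cpu_metric CPU=all BY host
| where pctUser > 50
```

This also makes sense in principle: "latest value above 50" is only knowable after the latest() aggregation has been computed per host, so the filter cannot be pushed down into the metric scan.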
Hello guys, I have a weird problem with JavaScript after the latest upgrade (8.2.8 to 9.0.6). JavaScript code:

var queryResults = smAlerteGetter.data("results");
console.log("Search done", queryResults);
console.log("pimba - ---- " + JSON.stringify(queryResults));
// when we have the result
queryResults.on("data", function() {
    console.log("Data received");
});

We should receive the events and see the log "Data received". The query runs fine, and we can see in the Activity > Jobs page that we received our events. However, we have other Splunk apps with similar scripts that behave correctly. Are we missing something in our app or configuration related to JavaScript? Please help!
Hi Team, we have 4 search heads in a cluster. One search head is getting a KV store port issue asking to change the port; the remaining 3 SHs are working fine. We are unable to restart Splunk on that particular SH. If I check the SH cluster status, only 3 servers are showing now. Splunk installed version: 9.0.4.1. Please find the error attached for visibility.

Regards, Siva.