I have a dashboard where I want to report whether each value of the results of a query matches a value in a fixed list. I have a base search that produces the fixed list (note: the eval originally built the list as separate quoted strings, which is invalid SPL; it has to be a single comma-delimited string for makemv to split):

    <search id="expectedResults">
      <query>
        | makeresults
        | eval expectedResults="My Item 1,My Item 2,My Item 3"
        | makemv delim="," expectedResults
        | mvexpand expectedResults
        | table expectedResults
      </query>
      <done>
        <set token="expectedResults">$result.expectedResults$</set>
      </done>
    </search>

Then I have multiple panels that will get results from different sources, pseudo-coded here:

    index="my_index_1" query
    | table actualResults
    | stats values(actualResults) as actualResults

Assume that the query returns "My Item 1" and "My Item 2". I am not sure how to compare the values returned from my query against the base list, to produce something that reports whether each value matches:

    My Item 1   True
    My Item 2   True
    My Item 3   False
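A minimal sketch of one way to do the comparison, assuming the fixed list can be inlined in each panel search rather than passed as a token (index and field names are taken from the example above):

    index="my_index_1" query
    | stats values(actualResults) as actualResults
    | mvexpand actualResults
    | eval matched="True"
    | append
        [| makeresults
         | eval actualResults="My Item 1,My Item 2,My Item 3"
         | makemv delim="," actualResults
         | mvexpand actualResults]
    | stats values(matched) as matched by actualResults
    | fillnull value="False" matched

Expected items that never appear in the panel's results only exist in the appended rows, so they have no matched value and fillnull marks them False.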
Hi Splunkers, I am facing a weird issue with the addcoltotals command. It works perfectly fine if I run the query in a new search tab, but once I add the same query to a dashboard it breaks. I am trying to run the command in Splunk DB Connect. Below is the query for reference:

    index=db_connect_dev_data
    | rename PROCESS_DT as Date
    | table OFFICE,Date,MOP,Total_Volume,Total_Value
    | search OFFICE=GB1
    | eval _time=strptime(Date,"%Y-%m-%d")
    | addinfo
    | eval info_min_time=info_min_time-3600, info_max_time=info_max_time-3600
    | where _time>=info_min_time AND _time<=info_max_time
    | table Date,MOP,OFFICE,Total_Volume,Total_Value
    | addcoltotals "Total_Volume" "Total_Value" label=Total_GB1 labelfield=MOP
    | filldown
    | eval Total_Value_USD=Total_Value/1000000
    | eval Total_Value_USD=round(Total_Value_USD,5)
    | stats sum(Total_Volume) as "Total_Volume", sum(Total_Value_USD) as "Total_Value(mn)" by MOP
    | search MOP=*
    | table MOP, Total_Volume, Total_Value(mn)

Let me know if anyone knows why this is happening.
Hello guys, I'm currently trying to set up Splunk Enterprise in a cluster architecture (3 search heads and 3 indexers) on Kubernetes using the official Splunk Operator and the Splunk Enterprise Helm chart. In my case, what is the most recommended way to set the initial admin credentials? Do I have to access every instance, define a user-seed.conf file under $SPLUNK_HOME/etc/system/local, and then restart the instance, or is there an automated way to set the password across all instances by leveraging the Helm chart?
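For what it's worth, a minimal sketch of the secret-based approach, assuming the operator's default convention of reading a namespace-scoped secret named splunk-<namespace>-secret (verify the exact name and keys against the operator version you run):

    # Hypothetical example: pre-create the global secret before deploying,
    # so every instance starts with the same admin password.
    # The "splunk-operator" namespace is an assumption from the default install.
    kubectl create secret generic splunk-splunk-operator-secret \
      --namespace splunk-operator \
      --from-literal=password='ChangeMe-Str0ngPassw0rd'

The operator is then supposed to propagate the credential to each pod it manages, which avoids touching user-seed.conf on every instance by hand.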
Hello Team, I have a parent dashboard with 5 panels. These are linked to one child dashboard, and the data changes based on the token passed by the filter. However, I notice that for one panel there is no Output field, due to which I get "no results found". Is there a logic to remove this passed token from the code?

    | search $form.app_tkn$ Category="A event" Type=$form.eventType$ Output=$form.output$
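One common Simple XML pattern, sketched here with hypothetical token names derived from your search, is to have the input set a second token that carries the whole clause, so a "show all" choice expands to nothing:

    <input type="dropdown" token="output">
      <label>Output</label>
      <choice value="*">All</choice>
      <change>
        <condition value="*">
          <set token="output_clause"></set>
        </condition>
        <condition>
          <set token="output_clause">Output="$value$"</set>
        </condition>
      </change>
    </input>

The affected panel then references $output_clause$ instead of Output=$form.output$, so the Output filter disappears from that search entirely when "All" is selected.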
    index=mainframe sourcetype=BMC:DEFENDER:RACF:bryslog host=s0900d OR host=s0700d
    | timechart limit=50 count(event) BY host
    | addcoltotals

I am looking to add the AVG of each 1-week total for each day.
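A minimal sketch of one interpretation: append a per-host daily-average row alongside the totals. I've swapped count(event) for plain count, since count(event) only counts events that carry a field literally named event; keep the original if that field really exists:

    index=mainframe sourcetype=BMC:DEFENDER:RACF:bryslog host=s0900d OR host=s0700d
    | timechart span=1d limit=50 count BY host
    | appendpipe
        [ stats avg(*) as avg_* ]

Run over a 1-week window, this appends one row whose avg_<host> columns hold the average daily count per host; addcoltotals can still be kept before the appendpipe if the grand-total row is wanted too.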
I know there is the Splunk Add-on for AWS, but I heard there is a simpler and easier way to read the buckets directly without using that add-on. Is that true?
Hello, I am trying to create a custom view (via XPath) in Event Viewer and later ingest it into Splunk via a WinEventLog input, leveraging the Windows add-on. Can it be done using WinEventLog, or some other way in inputs.conf, the way it works for Application/Security/System?

    [WinEventLog://MyCustomLog]

As suggested here, I tried this configuration, but no logs were onboarded, and it returned no error in the _internal logs either. Has anyone found a custom solution for ingesting these newly created custom views from Event Viewer into Splunk? Thanks
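One thing worth checking: Event Viewer "custom views" are saved filters, not real event channels, so a WinEventLog stanza can only point at an actual channel name (as listed by wevtutil el). A hedged sketch that approximates a custom view by monitoring the underlying channel and filtering on the forwarder (the channel name and event codes below are placeholders):

    [WinEventLog://Microsoft-Windows-PowerShell/Operational]
    disabled = 0
    renderXml = true
    whitelist = 4103,4104

As far as I know the XPath itself can't be pasted into inputs.conf; whitelist/blacklist on EventCode (or regex matching when renderXml is enabled) is the closest equivalent.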
Hello team, we need to migrate our deployment server from Azure cloud to on-premise, with a new IP and hostname. Please suggest which .conf files we have to change for the new IP and hostname. We also need to check the license: the deployment server holds the master license, so where is this master license file kept? Kindly suggest.
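A minimal sketch of the client-side change, assuming the standard setup where every forwarder points at the deployment server through deploymentclient.conf (the hostname below is a placeholder):

    # deploymentclient.conf on each deployment client
    [target-broker:deploymentServer]
    targetUri = new-ds.example.com:8089

On the license side, enterprise license files normally live under $SPLUNK_HOME/etc/licenses/enterprise/ on the license master, and license peers point at it via master_uri (manager_uri on newer versions) in the [license] stanza of server.conf; both are worth verifying after the move.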
Hi, I'm Lily. I want to get network traffic data from a Keysight Vision E10S (smart tap device). How can I ingest it using the Stream forwarder?
Hi All, we have a Monitoring Console, and after a recent release we observed that the aggregator queue, typing queue, and index queue fill ratios have all reached 100%. I have checked the indexer performance dashboards in the Monitoring Console and wasn't able to find any relevant error which might have caused it. The data ingestion rate in the licensing console looks the same as it does every day. Can someone please point me to the right steps to troubleshoot this? Thanks.
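One place to start is the queue metrics the indexers write to _internal; a minimal sketch using the standard metrics.log fields (add a host filter for your indexers as needed):

    index=_internal source=*metrics.log sourcetype=splunkd group=queue
    | eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
    | timechart max(fill_pct) BY name

Queues fill upstream of a bottleneck, so the furthest-downstream queue that is pegged usually points at the cause: a full index queue with the others backing up behind it typically implicates disk I/O on the indexers, while a full typing queue with a healthy index queue implicates heavy regex/TRANSFORMS work, often from newly onboarded data.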
I used the Splunk Add-on for AWS to send log files stored in S3 to SQS using S3 event notifications, and configured Splunk to read the log files from SQS. However, I got an error saying that the S3 test message that is always sent first by S3 event notifications could not be parsed. Splunk on EC2 is given KMS decryption privileges as shown below:

    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "sqs:*",
        "s3:*",
        "kms:Decrypt"
      ],
      "Resource": [
        "arn:aws:sqs:ap-northeast-1:*************:poc-splunk-vpcflowlog*",
        "arn:aws:s3:::poc-splunk-vpcflowlog",
        "arn:aws:s3:::poc-splunk-vpcflowlog/*"
      ]
    }

What could be the cause?
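For context, this may simply be benign: when event notifications are first configured, S3 publishes a one-off s3:TestEvent message that has no Records array, so an SQS-based S3 input has nothing in it to parse. The message looks roughly like this (values are placeholders):

    {
      "Service": "Amazon S3",
      "Event": "s3:TestEvent",
      "Time": "2024-01-01T00:00:00.000Z",
      "Bucket": "poc-splunk-vpcflowlog",
      "RequestId": "EXAMPLE",
      "HostId": "EXAMPLE"
    }

If the real object-created notifications that arrive afterwards are ingested fine, the parse error on this first message can usually be ignored.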
Hello! We keep going over our license usage. We can't seem to find what is causing us to go over; we've gone over 3 times now. Any suggestion on how to find what is causing this, please?
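A minimal sketch of the usual starting point: the license usage log on the license manager, broken down by index and sourcetype (b, idx, and st are standard license_usage.log fields):

    index=_internal source=*license_usage.log* type=Usage
    | stats sum(b) AS bytes BY idx, st
    | eval GB=round(bytes/1024/1024/1024, 2)
    | sort - GB

Run it over an offending day and over a normal day; the indexes/sourcetypes whose GB jumped are where to look. The Monitoring Console's License Usage views chart the same data.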
Hello, good day team! How are you? I downloaded and installed this app, but I can't find the "TA genesys cloud"; where can I download it? Does the TA live in another repository? Please, could you help me get this TA? If the TA doesn't currently live on Splunkbase, could you send it to me via email, please? Regards in advance! Carlos Martínez. carloshugo.martinez@edenred.com Edenred.
Hi Splunkers, I have a problem with a per-event index routing use case. In the involved environment, some data is currently collected in an index named ot. Some of these logs must be split out and redirected to other indexes, with the naming convention ot_<technology>. The involved inputs.conf file is placed under a dedicated app, named simply customer_inputs. The procedure is very clear to us: inside the above app, we created props.conf and transforms.conf and worked with keys and regexes. The strange behavior is this: if we work to redirect one kind of log, it works perfectly. When we add another log subset, nothing works properly. Let me share an example.

Scenario 1

In this case, we want: Windows logs must go to the ot_windows index; all remaining logs must still go to the ot index. We can identify the involved logs based on ports; they arrive as network input on port 514/udp, in CEF format.

First, our props.conf:

    [source::udp:514]
    TRANSFORMS-ot_windows = windows_logs

Second, our transforms.conf:

    [windows_logs]
    SOURCE_KEY = _raw
    REGEX = <our_regex>
    DEST_KEY = _MetaData:Index
    FORMAT = ot_windows

This configuration works fine: Windows logs go to the ot_windows index, and all remaining ones still go to the ot index. Then we tried another configuration, explained in the second scenario.

Scenario 2

In this case, we want: Nozomi logs must go to the ot_nozomi index; all remaining logs must still go to the ot index. Again, we can identify the involved logs based on ports; they arrive as network input on port 514/udp, in CEF format.

First, our props.conf:

    [source::udp:514]
    TRANSFORMS-ot_nozomi = nozomi_logs

Second, our transforms.conf:

    [nozomi_logs]
    SOURCE_KEY = _raw
    REGEX = <our_second_regex>
    DEST_KEY = _MetaData:Index
    FORMAT = ot_nozomi

Again, this conf works fine: all Nozomi logs go to the dedicated index, ot_nozomi, while all remaining ones still go to the ot index.

ISSUE

So, if we set either of the above confs alone, we get the expected behavior. However, when we try to merge them, nothing works: both Windows and Nozomi logs keep going to the ot index. Since each works fine on its own, we suspect the error is not in the regexes but in how we perform the merge. Currently, our merged conf files look like this:

props.conf:

    [source::udp:514]
    TRANSFORMS-ot_windows = windows_logs
    TRANSFORMS-ot_nozomi = nozomi_logs

transforms.conf:

    [windows_logs]
    SOURCE_KEY = _raw
    REGEX = <our_regex>
    DEST_KEY = _MetaData:Index
    FORMAT = ot_windows

    [nozomi_logs]
    SOURCE_KEY = _raw
    REGEX = <our_second_regex>
    DEST_KEY = _MetaData:Index
    FORMAT = ot_nozomi

Is our assumption right? If yes, what is the correct merge structure?
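For comparison, a variant worth testing (not confirmed as the fix, just the pattern commonly used for multiple routing rules on one stanza): a single transforms class whose value is an ordered, comma-separated list, so both rules run in a defined order under one class name:

    # props.conf - one class listing both rules, using the names from the example
    [source::udp:514]
    TRANSFORMS-ot_routing = windows_logs, nozomi_logs

transforms.conf stays exactly as in the merged version above. If this behaves differently from the two-class form, it would also be worth inspecting the effective config for another app overriding one of the class names, e.g. with splunk btool props list source::udp:514 --debug on the parsing tier.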
I installed Splunk Enterprise on a server named s1. I installed a forwarder on server f1. Both are Windows Server 2019. When I go into Forwarder Management, s1 sees f1, but I can't DO anything with it. There's nothing on the Forwarder Management screen to CONFIGURE. If I go to Settings | Data Inputs and try to configure "Remote Performance monitoring" (just as a test, just to monitor something), it says it's going to use WMI and that I should use a forwarder instead. Yes, please. I want to use a forwarder instead. I want to use my new forwarder, but I just don't see how.
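In case it helps, Forwarder Management only pushes configuration you stage as deployment apps on the deployment server; a minimal hypothetical sketch of a perfmon input app (app name, index, and counters are placeholders):

    # On s1: $SPLUNK_HOME\etc\deployment-apps\perfmon_inputs\local\inputs.conf
    [perfmon://CPU]
    object = Processor
    counters = % Processor Time
    instances = _Total
    interval = 10
    index = perfmon
    disabled = 0

Once the app exists, a server class created in Forwarder Management (Edit > Add apps) maps it to f1, and the forwarder collects the counters locally instead of s1 polling over WMI.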
Hello Splunk Community! In March, Splunk Community Office Hours spotlighted our fabulous Splunk Threat Research Team for the first time. This team of security content experts is dedicated to developing out-of-the-box detections to provide comprehensive visibility, empower accurate detection with contextual insights, and enhance operational efficiency. This ensures you can always stay ahead of threats. With our premium security solutions — Splunk Enterprise Security and Splunk SOAR — you can strengthen and unify your security operations, and reduce Mean Time to Respond. We hosted two Office Hours sessions with the threat research experts: The first session focused on Generative AI, where our experts @James Young and Kumar Sharad discussed Splunk's best practices for AI and common use cases for Splunk Enterprise Security and SOAR. They explored the integration of AI/ML into Splunk products and offered their recommendations on the approach. They delved into how Gen AI could support SOC processes, including threat and anomaly detection and more. The discussion also covered data privacy and sensitivity, topics of significant interest today! The second session, led by our threat research experts @Jose Hernandez and @Michael Haag, centered on Threat Detection and Response Content. This session highlighted how to leverage the latest security content to automatically monitor your data for findings. Our experts began with the basics, sharing the best approach to getting started with security content, and then answered more specific questions, like the best automation achievable for creating incidents with the BMC Remedy Ticketing Tool. @Michael provided a thorough demo on enabling and implementing security content at the session's end, which could be very helpful for optimizing your operational process. To listen to the conversations and find the answers to all these questions, feel free to check out our on-demand session recordings: Generative AI, and Threat Detection and Content Response. If you have any questions regarding these topics, please join our #office-hours Slack channel for further discussions. You'll also find links to previous session Q&A decks and live recordings. If you are not yet a member of our splunk-usergroups workspace, you can request access here. Missed the previous events? No worries! Subscribe to the Community Office Hours page to receive notifications for upcoming events, like Detecting Remote Code Executions with the Splunk threat research team on June 5th at 1pm PT/4pm ET! Join us and ask your questions directly to the experts! Cheers!
Below is the regex used. Here we want to extract the following fields: DIM, TID, APPLICATION, POSITION, CORRLATIONID. The rex I used extracts DIM, TID, APPLICATION as one field, but we need them separately. We also need to write the rex generically, so that it captures the data even if different field names appear.
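A hedged sketch of one generic approach, assuming the raw events are comma-separated KEY=value pairs (the actual event format and the original regex aren't visible in the post, so the delimiters here are guesses):

    ... base search ...
    | extract kvdelim="=" pairdelim=","
    | table DIM TID APPLICATION POSITION CORRLATIONID

The extract (kv) command derives field names from the data itself, so it keeps working when new field names show up, unlike a rex with hard-coded capture groups.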
So I'm trying to use #splunkcloud to make calls to a RESTful API for which there is no add-on or app available on Splunkbase. There is nothing under Settings > Data Inputs > Local Inputs to accomplish this task... which kind of blows my mind. Anyone found a solution for this or something similar? TIA
Hi, we get the following exceptions while trying to load APM agent 24.3 in WebLogic 14.1:

    java.lang.IllegalAccessError: class jdk.jfr.internal.SecuritySupport$$Lambda$225/0x0000000800979c40 (in module jdk.jfr) cannot access class com.singularity.ee.agent.appagent.entrypoint.bciengine.FastMethodInterceptorDelegatorBoot (in unnamed module @0x2205a05d) because module jdk.jfr does not read unnamed module @0x2205a05d

    java.lang.IllegalStateException: Unable to perform operation: create on weblogic.diagnostics.instrumentation.InstrumentationManager

The WebLogic managed server won't start after throwing these exceptions. Any insights on what might be causing these errors? Thanks, Roberto
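One thing that may be worth trying, purely as a sketch based on the error text ("module jdk.jfr does not read unnamed module"): open the module graph so jdk.jfr can read the unnamed module the agent classes land in. I can't confirm this is the supported fix for the agent, so verify with AppDynamics support for your version:

    # Hypothetical workaround: add to the managed server's JVM arguments,
    # e.g. in setUserOverrides.sh or the server start arguments
    JAVA_OPTIONS="${JAVA_OPTIONS} --add-reads=jdk.jfr=ALL-UNNAMED"
    export JAVA_OPTIONS

--add-reads is a standard JDK launcher option; the open question is whether the agent's JFR integration then behaves correctly, which is why this is only a sketch.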
I don't see a checkbox as part of the inputs list. It is possible in Simple XML, but I would like to know how it can be achieved using Dashboard Studio.
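From memory of the Dashboard Studio JSON source (worth verifying against your version, since available input types change between releases), there is no checkbox type, but a single-item multiselect can stand in for one. A hypothetical sketch, with the input name, token, and values invented for illustration:

    "inputs": {
      "input_show_errors": {
        "type": "input.multiselect",
        "title": "Options",
        "options": {
          "items": [
            { "label": "Show errors only", "value": "ERROR" }
          ],
          "token": "sev"
        }
      }
    }

Ticking the single item sets the $sev$ token, which panels can consume like any other token; untick it and the token clears, which the search has to tolerate (e.g. via a default).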