All Topics


I believe there is no report Splunk cannot produce, but I'm having trouble with this one. I'd like to generate a report that compares the last 30 days' average duration with the last 90 days' average duration and shows the increase/decrease. I have no trouble getting the 90-day average, but I can't figure out how to include the 30-day average in the same query. The data I'm working with is similar to this:

date        Job    Duration
9/1/2022    Job1   33
9/1/2022    Job2   12
9/1/2022    Job3   128
9/2/2022    Job1   14
9/2/2022    Job2   99
9/2/2022    Job3   128
9/3/2022    Job1   16
9/3/2022    Job2   33
9/3/2022    Job3   22
9/4/2022    Job1   196
9/4/2022    Job2   393
9/4/2022    Job3   192

I'd like a report that looks like this:

Job     All Days   Last 2 Days
Job1    21         17
Job2    44         35
Job3    28         17

I can generate the All Days column, but am not sure how to get the last 2 days. Here's what I have:

index=foo
| bucket _time span=1d
| stats sum(duration) as duration by _time, jobtype
| stats avg(duration) as duration by jobtype

Any gurus out there who can help?
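One way to get both windows in a single search (an untested sketch; it assumes numeric duration and jobtype fields as in the query above): compute the per-day totals once, take the full-range average with eventstats, then average only the recent days via a conditional eval:

```spl
index=foo
| bucket _time span=1d
| stats sum(duration) as duration by _time, jobtype
| eventstats avg(duration) as "All Days" by jobtype
| eval recent=if(_time >= relative_time(now(), "-2d@d"), duration, null())
| stats avg(recent) as "Last 2 Days", first("All Days") as "All Days" by jobtype
```

The same pattern should give the 30-day vs. 90-day comparison by running the search over 90 days and replacing "-2d@d" with "-30d@d".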
I have the following sample XML event where I want to extract a specific value for a child tag. For example, when the <OrderFacts> value is 'Place', extract the corresponding <OrderUpdateValue>, which is 'False':

<OrderUpdateField>
     <OrderFacts>Place</OrderFacts>
     <OrderDataType>String</OrderDataType>
     <OrderUpdateValue>False</OrderUpdateValue>
     <OrderFacts>Number</OrderFacts>
     <OrderDataType>Integer</OrderDataType>
     <OrderUpdateValue>1</OrderUpdateValue>
     <OrderFacts>Location</OrderFacts>
     <OrderDataType>String</OrderDataType>
     <OrderUpdateValue>Earth</OrderUpdateValue>
</OrderUpdateField>
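One possible approach (a sketch, untested against your data): pull the repeated tags out as multivalue fields with spath, find the index of the wanted <OrderFacts> entry with mvfind, and read the <OrderUpdateValue> at the same position with mvindex:

```spl
...
| spath path=OrderUpdateField.OrderFacts output=facts
| spath path=OrderUpdateField.OrderUpdateValue output=vals
| eval idx=mvfind(facts, "^Place$")
| eval place_value=mvindex(vals, idx)
```

This relies on the tags repeating in a fixed order, so the nth OrderFacts lines up with the nth OrderUpdateValue.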
Hi folks, we've been using Phantom for a while now and are currently implementing SAML integration. The concerning part is that the objects (assets, playbooks, permissions...) are tied to user ids instead of usernames, so logging in via SAML generates new user ids, and we have to remap those objects from that particular local user to the new SAML user id. Is there any way to do that via REST, or did anyone ever build a playbook to make that change? My idea is to rename the local users, appending "_local" to the username, ask the users to log in via SSO, and then have a routine that identifies SAML username = local username + "_local" and moves the objects from the local id to the new SAML id. Doable?
Hi folks, how can I display the results for 2022-09-02 in the Result_Prev column and 2022-09-09 in the Result column, keeping the other columns the same? I'm calculating the percent difference from this week's results to the previous week's. Any help is appreciated. Thank you!
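A sketch of the week-over-week layout (untested; the value and group_field names are placeholders for whatever your table is keyed on): sort each event into one of the two columns based on its date, pivot with chart, then compute the percent difference:

```spl
...
| eval col=if(strftime(_time, "%Y-%m-%d") = "2022-09-09", "Result", "Result_Prev")
| chart latest(value) over group_field by col
| eval Pct_Diff=round((Result - Result_Prev) / Result_Prev * 100, 2)
```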
I cannot seem to get more than myself and "Unassigned" as users in Alert Manager. If I switch from "Builtin" to "alert_manager", it still says "Both". The install info for Alert Manager mentions typing in alert_manager users, but I have nowhere to type these. I did see an older thread from 2018 where the person mentioned inheritance and roles, but I couldn't make sense of it. Any help is appreciated, thank you!
Can someone explain the prestats option within tstats? I have reread the docs a bunch of times but just don't find a clear explanation of what it does, other than that it is "designed to be consumed by commands that generate aggregate calculations". Below are two very basic queries which return vastly different results.

index="myindex" sourcetype="mysourcetype" DRA=B*
| rex field=DRA "^(?<pfx>\D\S{1}).*"
| timechart count by pfx useother=f limit=0 usenull=f

and

| tstats prestats=t count where index="myindex" sourcetype="mysourcetype" DRA=B* by DRA, _time
| rex field=DRA "^(?<pfx>\D\S{1}).*"
| timechart count by pfx useother=f limit=0 usenull=f

I get it -- I must use prestats for the search using tstats. But then, what are the values that it IS returning without prestats? What is prestats doing?
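For comparison, here is a sketch of what the non-prestats form might look like (untested): with prestats=f, tstats emits ordinary finished rows, one per DRA/_time pair, each carrying a count field, so a downstream timechart count would just count rows. To chart the events you'd have to sum that field instead:

```spl
| tstats count where index="myindex" sourcetype="mysourcetype" DRA=B* by DRA, _time span=1d
| rex field=DRA "^(?<pfx>\D\S{1}).*"
| timechart sum(count) by pfx useother=f limit=0 usenull=f
```

With prestats=t, tstats instead emits intermediate results in the internal format that aggregating commands like timechart count can consume directly, which is presumably what the docs mean by "designed to be consumed by commands that generate aggregate calculations".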
I am struggling to find a solution where I would like a dashboard tile to turn red if an email has not been received by 11:30 every weekday morning. The search behind the tile has the timeframe set to Today, so I would like it to be green by default and stay green if Splunk ingests the email before 11:30 am. It should only turn red if the email hasn't been ingested by 11:30 am. This is the base search query:

index=msexchange sourcetype="MSExchange:2013:MessageTracking" subject="**** - Success" directionality=Originating psender="********"
| stats dc(message_subject)

Can you help?
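One way to sketch this (untested; the received and status field names are my own): compute a status from the count and the current time of day, then colour the single value by that field (e.g. with rangemap or the visualization's colour settings):

```spl
index=msexchange sourcetype="MSExchange:2013:MessageTracking" subject="**** - Success" directionality=Originating psender="********"
| stats dc(message_subject) as received
| eval hhmm=tonumber(strftime(now(), "%H%M"))
| eval status=if(received > 0 OR hhmm < 1130, "green", "red")
```

The idea: before 11:30 the tile is green regardless; after 11:30 it stays green only if at least one matching email was ingested today.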
Hello, I am currently receiving firewall data on my heavy forwarder on a specific port number. On the HF there is a simple inputs.conf with

[udp://:portnumber]
sourcetype = fgt_log
index = fw_data

and an outputs.conf that sends everything to the indexers. The problem is that I am receiving a lot of garbage traffic (like DNS traffic to 8.8.8.8 or 8.8.4.4). I don't want to index this data, and I don't have access to the firewall, so I can't just stop it there. I thought that a blacklist would stop the events from coming. I tried a simple blacklist like the ones used under the [monitor] stanza, something like this: blacklist=(dstip=8\.8\.8\.8|dstip=8\.8\.4\.4|service="DNS"). Unfortunately it didn't work... I did some research but only found "acceptFrom", which I don't think is useful in this situation. I came across this post but it wasn't useful: https://community.splunk.com/t5/Getting-Data-In/Blacklist-a-host-hosts-is-sending-logs-to-Splunk-via-TCP/m-p/289283 Any tips?
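Since the heavy forwarder parses events, the usual approach is to route the unwanted events to the nullQueue with props/transforms rather than a blacklist (which only filters file paths on monitor inputs, which is presumably why it had no effect on a UDP input). A sketch built from the regex in the post, to be placed on the HF:

```conf
# props.conf
[fgt_log]
TRANSFORMS-drop_dns = drop_dns_noise

# transforms.conf
[drop_dns_noise]
REGEX = dstip=8\.8\.8\.8|dstip=8\.8\.4\.4|service="DNS"
DEST_KEY = queue
FORMAT = nullQueue
```

A restart of the HF is needed for the change to take effect; the dropped events still count toward parsing load but not toward indexed volume.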
I have an Oracle API Gateway that I need to monitor to capture some errors. It is a very old version, actually deprecated by Oracle. There are two nodes that have the JVM agents, but the information gathered is very poor. Following my provider's instructions, I could use a log extension. Does anyone have experience with this kind of platform? Which of the log extensions could I use for this? I'm thinking of this extension: https://developer.cisco.com/codeexchange/github/repo/Appdynamics/log-monitoring-extension/
Hello all, I've been trying for 3 days now to find a solution for my problem, but without success. I looked around for solutions and already asked questions, but I didn't find any (or I missed it). I have a monitoring tool that sends host problems to Splunk in JSON format. Once the alert/problem ends, the tool sends another message to Splunk, but with very little information except for one field that contains the alert id of the initial alert. Here is an example.

Initial alert:
{"timestamp":1662715948,"guid":468431423,"result":1,"eid":1580,"name":"test kouzou","sev":3,"h":[{"host":"toto","name":"toto"}],"team":["titi"],"tags":[{"tag":"App","value":"System"},{"tag":"App","value":"host-up"}]}

Recovery:
{"timestamp":1662716608,"guid":604699994,"result":0,"eid":1059134005,"init_eid":1580}

The idea is to search for alerts that have not been resolved yet and show them in a dashboard; if the alert has been acknowledged by the tool, I don't want to show it. I'm trying to match the "init_eid" field against the "eid" field, but without success. It is JSON content, and I tried some manual extractions to guarantee the "normality" of the fields. I also realized that the "transaction" command is not working at all, only for one field. Do you have any ideas on how I could achieve this goal? Thank you in advance.
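A sketch of one way to pair the two messages up (untested; index=monitoring_tool is a placeholder, and it assumes eid and init_eid are extracted from the JSON): give every event a common key with coalesce, so the recovery inherits the initial alert's id, then keep only the alerts whose latest event is still the problem state:

```spl
index=monitoring_tool
| eval alert_key=coalesce(init_eid, eid)
| stats latest(result) as last_result, values(name) as name by alert_key
| where last_result=1
```

This avoids transaction entirely: recoveries carry result=0, so any alert_key whose latest result is still 1 has no recovery yet.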
Hi, does the Splunk App for Amazon Connect have the ability to report/feed into Splunk when people listen to or download recorded calls in AWS Connect? Thanks, Mark
Hello - I have a table with several columns:

Host     Src IP     Dest IP    Src Port   Dest Port
myHost   10.0.0.1   10.0.0.2   50000      80

I would like to have cell-based drilldowns. For example, Host would drill down into a dashboard called host_detail.xml; the rest of the columns would fill the value of the clicked cell into the appropriate filter token. The tokens are called src_ip_tok, dest_ip_tok, src_port_tok, and dest_port_tok. How would I accomplish this? Thank you.
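In Simple XML this can be done with per-column <condition> elements inside the table's <drilldown> (a sketch; the form.host parameter name on the host_detail link is an assumption, adjust it to whatever input token that dashboard defines):

```xml
<table>
  <!-- ... search and options ... -->
  <drilldown>
    <condition field="Host">
      <link target="_blank">host_detail?form.host=$click.value2$</link>
    </condition>
    <condition field="Src IP">
      <set token="src_ip_tok">$click.value2$</set>
    </condition>
    <condition field="Dest IP">
      <set token="dest_ip_tok">$click.value2$</set>
    </condition>
    <condition field="Src Port">
      <set token="src_port_tok">$click.value2$</set>
    </condition>
    <condition field="Dest Port">
      <set token="dest_port_tok">$click.value2$</set>
    </condition>
  </drilldown>
</table>
```

$click.value2$ resolves to the value of the clicked cell, so each condition routes its own column's value to the matching token.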
The Splunk documentation says: "The fillnull command is a distributable streaming command when a field-list is specified. When no field-list is specified, the fillnull command fits into the dataset processing type." I wonder why it works as a dataset processing command when no fields are specified. The results are the same either way, but there must be a reason. Thanks for letting us know.
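A plausible reading (my interpretation, not from the docs): with an explicit field list, each indexer already knows which fields to fill and can process its own slice of events independently, so the command streams. Without a list, fillnull must first discover the complete set of field names that occurs anywhere in the results before it can fill each event, which requires seeing the dataset as a whole. Compare:

```spl
| fillnull value=0 count, bytes
| fillnull value=0
```

The first form is distributable (fields known up front); the second is dataset processing (the field set is only known after inspecting every result).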
Hi community, a few months ago I took over our Splunk cluster from a colleague who quit his job. Now I have the situation that we are dismantling some application servers which have a universal forwarder installed. What are the recommended steps to unsubscribe the forwarder from the management and deployment server? Apps and serverclasses are not affected; these are still needed. Thanks in advance for your support. Armin
Can anyone explain the steps to be followed to convert advanced XML dashboards to simple XML dashboards?
I want to display the number of events sent in a certain time range in the dashboard. I think the best way is with "Single Value". How can I display the number of search results of a search in the dashboard? For example, my search ("message.additionalInfo.attributes.properties.receiver-market-partner-id"=12345678) finds 1500 events. How can I display the 1500 in the dashboard as a single value? Thanks a lot!
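Appending a stats count should reduce the result to the single number that the Single Value visualization displays (a sketch using the search from the post; the time range comes from the panel's time picker):

```spl
"message.additionalInfo.attributes.properties.receiver-market-partner-id"=12345678
| stats count
```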
Hi, when I signed up, I did so quickly. Expecting to be spammed into oblivion, I didn't use my full name, but Jane Doe'd it. I did not realise the first name/last name cannot be changed, nor that it would be used everywhere (auto-generated certs, exam enrollment?). Surely somebody can change this for me? I mean, people get married and names change all the time. Can somebody point me at who I can contact to change my name? It's about the full name as displayed in the main portal.
I saw a question on the internet while searching for answers to a separate question, and a few comments below it disagreed about the correct answer. Now I am confused as to what the correct answer should have been. This was the question:

This file has been manually created on a universal forwarder:

/opt/splunkforwarder/etc/apps/my_TA/local/inputs.conf
[monitor:///var/log/messages]
sourcetype = syslog
index = syslog

A new Splunk admin comes in, connects the universal forwarders to a deployment server, and deploys the same app with a new inputs.conf file:

/opt/splunk/etc/deployment-apps/my_TA/local/inputs.conf
[monitor:///var/log/maillog]
sourcetype = maillog
index = syslog

Which file is now monitored: /var/log/maillog, or both /var/log/maillog and /var/log/messages?
I'm using a lookup but don't know how to do a partial match instead of an exact match. Example: 10.20.30.40 is in the list, and I want to get the result for URL=https://10.20.30.40~, is that possible?
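One documented route (a sketch; the lookup name and url field are assumptions) is to set a wildcard match type on the lookup in transforms.conf and put wildcard patterns in the CSV itself:

```conf
# transforms.conf
[my_url_lookup]
filename = my_url_lookup.csv
match_type = WILDCARD(url)
```

With a CSV row whose url value is *10.20.30.40*, a lookup against URL=https://10.20.30.40/... would then match. For fields that hold bare IP addresses, match_type = CIDR(ip) is another option.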
Hello, the documentation shows the JSON format as metadata fields plus an "event" field with the additional data in it: Format events for HTTP Event Collector - Splunk Documentation

My question is: how important is it to preserve this structure? Can you remove the "event" nesting? That's how events look in Splunk right now; I have to press the "+" sign to see the actual message. If I remove the "event" nesting, I can see the main message without extra actions.

P.S. If this is of any importance, the data is being transferred to Splunk via TCP, not HTTP.
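For reference, the structure from the linked docs page looks roughly like this (all values here are placeholders):

```json
{
  "time": 1662715948,
  "host": "myhost",
  "source": "myapp",
  "sourcetype": "_json",
  "event": { "message": "the actual payload" }
}
```

That envelope is part of the contract of the HEC /services/collector/event endpoint, which parses it and indexes only the "event" body. Since this data arrives over a plain TCP input rather than HEC, the envelope presumably isn't required at all, and sending the inner object on its own should index as flat JSON (worth verifying against the sourcetype's props.conf settings).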