All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Why is the Splunk On-Call app antiquated, and why does it not preserve usernames and passwords?
Hello Friends, I have an interesting query that I would like help on. I have three transactions that we are tracking, and I would like to create a graph that has the three transaction time categories and their averages. I am able to graph the three together, and I can compute their averages individually, but I need help combining them. My code to show all of the different graphs is:

|multisearch
    [search ( index="a" addinventory InboundInventoryChangeElement )
    | eval addTime = if(actionelementname=="AddInventory", strftime(strptime(length,"%H:%M:%S.%f"),"%S.%f"), length)
    | where addTime>0]
    [search ( index="a" SWAPinventory InboundInventoryChangeElement )
    | eval swapTime = if(actionelementname=="SwapInventory", strftime(strptime(length,"%H:%M:%S.%f"),"%S.%f"), length)
    | where swapTime>0]
    [search ( index="a" removeinventory InboundInventoryChangeElement )
    | eval removeTime = if(actionelementname=="RemoveInventory", strftime(strptime(length,"%H:%M:%S.%f"),"%S.%f"), length)
    | where removeTime>0]
| table _time, addTime, swapTime, removeTime

And here is my search for the averages:

index="a" addinventory InboundInventoryChangeElement
| eval addTime = strftime(strptime(length,"%H:%M:%S.%f"),"%S.%f")
| where addTime>0
| table _time, addTime
| join
    [search index="a" addinventory InboundInventoryChangeElement
    | eval addTime = strftime(strptime(length,"%H:%M:%S.%f"),"%S.%f")
    | where addTime>0
    | stats avg(addTime) as AverageAddTime]

The other two searches are exactly the same, except that the variables differ for add, swap, and remove. Any help would be greatly appreciated! Also, if there is an easier way than joins and multisearches, please let me know! Thank you!!!
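Outside of SPL, the per-category averaging the post describes boils down to: parse each duration string, drop non-positive values, and average per category. A minimal Python sketch of that logic (the category names come from the post; parsing the full "%H:%M:%S.%f" duration into seconds is my assumption, since the post's strftime(..., "%S.%f") keeps only the seconds component):

```python
from collections import defaultdict
from datetime import datetime

def parse_seconds(length):
    """Parse a duration like "00:00:01.250000" (the post's %H:%M:%S.%f) into seconds."""
    t = datetime.strptime(length, "%H:%M:%S.%f")
    return t.hour * 3600 + t.minute * 60 + t.second + t.microsecond / 1e6

def category_averages(events):
    """events: iterable of (category, length-string); returns {category: average seconds}.
    Mirrors `where addTime>0 | stats avg(addTime)` done once per category."""
    sums, counts = defaultdict(float), defaultdict(int)
    for category, length in events:
        secs = parse_seconds(length)
        if secs > 0:
            sums[category] += secs
            counts[category] += 1
    return {c: sums[c] / counts[c] for c in sums}
```

In SPL terms, doing all categories in one aggregation pass (rather than three joined searches) is usually the simpler shape.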
We have 2 types of orders in the system: some are entered manually by phone and some are processed automatically as they are fed in by other systems. The way I can differentiate them is by the order timestamps:

- Phone orders do not contain milliseconds in the order timestamp (2022-09-16T17:07:41Z)
- Orders filled automatically by other systems contain milliseconds (2022-09-16T16:22:28.573Z)

I am calculating the processing delays on these orders, but I want to display the results on 2 rows:

1. Phone orders max delays
2. System orders max delays

Here is what I am using now:

MySearch
| rex field=_raw "(?ms)^(?:[^ \\n]* ){9}\"(?P<TradeDateTS>[^\"]+)" offset_field=_extracted_fields_bounds
| rex field=_raw "^(?:[^ \\n]* ){7}\"(?P<StoreTS>[^\"]+)" offset_field=_extracted_fields_bounds0
| eval Delay = (strptime(StoreTS, "%Y-%m-%dT%H:%M:%S.%N"))-(strptime(TradeDateTS, "%Y-%m-%dT%H:%M:%S.%N"))
| stats max(Delay)

Note: the goal is not to add or remove the milliseconds information.
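The classification rule the post relies on (fractional seconds present vs. absent) can be expressed as a single regex test. A minimal Python sketch of that idea, with the two sample timestamps from the post:

```python
import re

# A fractional-seconds part (".573" before the trailing Z) marks an
# automated/system order; its absence marks a phone order (per the post).
HAS_MILLIS = re.compile(r"\.\d+Z?$")

def order_source(ts):
    """Classify an order timestamp string as "system" or "phone"."""
    return "system" if HAS_MILLIS.search(ts) else "phone"
```

In SPL the same test could drive an eval'd category field that the max-delay stats is then split by, without touching the timestamps themselves.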
Hello! I am doing calculations for the time it takes when a machine is undergoing maintenance. Right now I calculate the time in hours the maintenance takes, i.e., how long the machine was not in use, BUT I want the time to show only how much it was down on each specific day. My code is based on the time column. If machine maintenance starts on one day (e.g., 9/7) but ends the next day (e.g., 9/8), I want one row showing the machine downtime for 9/7 (out of that day's 24 hours, the time the machine was being worked on) AND a new row created for 9/8 (the time it was down during that day). Currently it gives me the total hours in a single row based on when work on the machine started (screenshots of current vs. desired results not shown). Can I have some help, please? Code:

index=.......search.....
| eval open = strptime(Open_Date, "%m/%d/%Y %I:%M:%S %P")
| eval close = strptime(Close_Date, "%m/%d/%Y %I:%M:%S %P")
| eval Diff=round(((close-open)/3600),1)
| eval CloseTime=strftime(close,"%Y-%m-%d")
| eval OpenTime=strftime(open,"%Y-%m-%d")
| eval Time=strftime(_time,"%Y-%m-%d")
| table Time OpenTime CloseTime Close_Date Open_Date Diff
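The core of the request is splitting one open/close interval at midnight boundaries so each calendar day gets its own downtime figure. A minimal Python sketch of that splitting logic (I use the standard %p AM/PM code; the field format otherwise follows the post):

```python
from datetime import datetime, timedelta

def downtime_per_day(open_ts, close_ts, fmt="%m/%d/%Y %I:%M:%S %p"):
    """Split the [open, close) maintenance window at midnight boundaries.
    Returns {"YYYY-MM-DD": hours down on that day}, rounded to one decimal."""
    start = datetime.strptime(open_ts, fmt)
    end = datetime.strptime(close_ts, fmt)
    result = {}
    cur = start
    while cur < end:
        # End of the current chunk: the next midnight, or the close time.
        next_midnight = datetime(cur.year, cur.month, cur.day) + timedelta(days=1)
        chunk_end = min(end, next_midnight)
        hours = (chunk_end - cur).total_seconds() / 3600
        result[cur.strftime("%Y-%m-%d")] = round(hours, 1)
        cur = chunk_end
    return result
```

In SPL the analogous approach is to expand each event into one row per day it spans and clamp the open/close times to that day's boundaries before computing Diff.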
We are attempting to use Alert Manager as a way to maintain an audit trail. We open cases and note them before closing, but there are some that we'd want to auto-suppress. I'm sure I'm missing something simple here, hence the post. The page in Alert Manager says:

Usage
- Use "$result.fieldname$" (without quotes) to refer to a field from results. Can be used in either "field" or "value".
- Use "$fieldname$" (without quotes) to refer to a metadata field, such as "$title$". Can be used in either "field" or "value".
- Use "_time" (without quotes) in "field" to compare the incident timestamp against a value (e.g. for maintenance windows).
- Note: All of the rules configured must be true for a valid suppression.

If I just put $title$ in the field space and specify a title, nothing gets suppressed. Do I need to use different language / arguments here? Thank you in advance.
How do I get the IP address of a specific host?
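If the goal is simply to resolve a hostname at the OS level (as opposed to pulling host IPs out of indexed events with SPL), a minimal Python sketch using the standard library:

```python
import socket

def host_ip(hostname):
    """Resolve a hostname to its IPv4 address using the system resolver."""
    return socket.gethostbyname(hostname)
```

Within Splunk itself, the usual route is instead to search for events from that host and inspect fields like src_ip or dest_ip, depending on the sourcetype.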
Hello folks, I want to enforce TLS certificate and hostname checks. Do you know if the sslCommonNameToCheck option accepts wildcard names? Thank you
I was asked to archive search results in a CSV and then send those results periodically by email. My solution is to do this with 2 reports. The first report runs the search and appends the results to a lookup. The second just grabs the entire lookup (| inputlookup my_lookup.csv) and emails the results as a CSV. This seems to work fine, but I feel like there should be a more elegant solution, perhaps in only one search/report. I'm curious what others think.
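The append-then-export pattern described here is generic enough to sketch outside of SPL. A minimal Python illustration of the two halves (the file name is hypothetical; the archive is written with a header only on first creation, mirroring how an outputlookup append accumulates rows):

```python
import csv
import os

def append_results(path, rows, fieldnames):
    """Append search results (list of dicts) to a CSV archive.
    Writes the header row only when the file is first created."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        if new_file:
            writer.writeheader()
        writer.writerows(rows)

def read_archive(path):
    """Read the whole archive back, as the emailing report would."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

The two-report split in the post is actually a reasonable design: archiving on every run and emailing on a separate schedule are genuinely different cadences, so separating them is not necessarily inelegant.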
Hey Team, I am trying to generate a search which returns a complete set of results from today and then compares it with a search whose results came in only between 4-5pm. I then want to work out the percentage of results which came in between 4-5pm. So far I have the search below (screenshot not shown), with the **** being where I think I need the timeframe filter. Thanks!
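Stripped of SPL specifics, the calculation is: count events whose hour falls in the 4-5pm window, divide by the day's total, multiply by 100. A minimal Python sketch of that arithmetic:

```python
from datetime import datetime

def pct_in_window(timestamps, start_hour=16, end_hour=17):
    """Percentage of timestamps whose hour falls in [start_hour, end_hour).
    Defaults to the 4-5pm window from the post."""
    total = len(timestamps)
    if total == 0:
        return 0.0
    hits = sum(1 for ts in timestamps if start_hour <= ts.hour < end_hour)
    return 100.0 * hits / total
```

In SPL the equivalent shape is a single search over the whole day with a conditional count (e.g. an eval flag on date_hour) divided by the overall count, which avoids running two separate searches.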
I am running a query where the following fetches latency above 1000 milliseconds (query screenshot not shown). As you can see, the query uses stats and a where clause to yield approximately 60 results. When I try to timechart this data, replacing stats with streamstats, I now get 26K+ events. Why is my timechart not reflecting the 60 results I was fetching when using the stats command?
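The likely crux is that stats collapses events into one row per group while streamstats emits one output row per input event, carrying a running value. A minimal Python sketch contrasting the two shapes (field names are illustrative, not from the post's hidden screenshot):

```python
def aggregate_avg(events):
    """Like `stats avg(latency) by endpoint`: one output row per group."""
    groups = {}
    for endpoint, latency in events:
        groups.setdefault(endpoint, []).append(latency)
    return {e: sum(vals) / len(vals) for e, vals in groups.items()}

def running_avg(events):
    """Like `streamstats avg(latency)`: one output row per input event,
    each carrying the running average so far."""
    out, total = [], 0.0
    for i, (endpoint, latency) in enumerate(events, 1):
        total += latency
        out.append((endpoint, total / i))
    return out
```

So filtering the running value with the same where clause still leaves up to one row per raw event, which is why the 60 aggregated results balloon to 26K+ under streamstats.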
So a few days ago I typo'd the index name in the inputs.conf file on serverA running the universal forwarder, and inadvertently sent the log data to IndexB. We discovered it a few hours ago, and since then I have verified multiple times that serverA's inputs.conf is now correct and restarted Splunk. It's been about an hour and this one inputs stanza is still somehow forwarding log data to IndexB instead of IndexA. I have grepped splunkd.log, but all I can see is it parsing the monitored file correctly; nothing about its destination index. Any help would be appreciated!
Hi, I have a weird problem: some data is gone after a few days, but not from a summary index based on the same source. Let me explain. I have an index whose data I use to get some results with my query over a 24h time range. I created a summary index fed by almost the same query (a saved search) to show similar info on another dashboard (Historic). If you launch both queries (the original and the saved search for the summary index) you get the same number of events, but on some dates the results of the query launched against the summary index do not match the original index. Let's say I get 50 events from the original index and 60 from the summary index. How can that be? I've been told the transaction command can sometimes cause trouble. Is this true? I use this command in the original query and in the saved search that feeds the summary index, but not in the one that is run to show info based on the summary index. Thanks. Regards,
I have an app with my alerts. I have risk enabled and it's working; however, risk isn't showing up in the Edit Correlation Search menu. Is there a setting in a .conf file I am missing? I looked into alert_actions.conf but don't see any other rule linking to it. Below is my risk setting for one of my rules:

action.risk = 1
action.risk.param._risk = [{"risk_object_field": "dest", "risk_object_type": "system", "risk_score": 14}]
action.risk.param._risk_message = Wsmprovhost.exe spawned a LOLBAS process on $dest$.
action.risk.param._risk_score = 0
action.risk.param.verbose = 0
Is it possible to access the script/resource files located within a Splunk app's bin or default directory from the Splunk REST API or Java SDK? I can get the path to the app; however, I cannot figure out a way to access the files within the app itself. Is this possible in Splunk?
Hi everyone, From dbxquery I retrieve this table:

id    start_time1            end_time1                   start_time2            end_time2
1234  13/09/2022 21:46:43.0  16/09/2022 12:10:35.414809  15/09/2022 21:46:32.0  16/09/2022 09:27:41.0
1234  13/09/2022 21:46:43.0  16/09/2022 12:10:35.414809  14/09/2022 24:52:03.0  15/09/2022 10:15:56.0
1234  13/09/2022 21:46:43.0  16/09/2022 12:10:35.414809  15/09/2022 10:30:14.0  15/09/2022 10:47:26.0

I want to find the start_time2 that is closest to start_time1, meaning the 2nd line. How can I do this, please?

Thanks, Julia
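The comparison itself is just "parse both timestamps, take the candidate with the smallest absolute gap". A minimal Python sketch of that selection (note that the post's second row has hour 24, which standard strptime rejects, so the example below uses only the valid rows; handling 24:xx values would need a pre-normalization step):

```python
from datetime import datetime

FMT = "%d/%m/%Y %H:%M:%S.%f"

def closest(start_time1, candidates):
    """Return the candidate start_time2 with the smallest absolute
    time difference from start_time1."""
    ref = datetime.strptime(start_time1, FMT)
    return min(candidates, key=lambda ts: abs(datetime.strptime(ts, FMT) - ref))
```

In SPL the same shape works with strptime, an eval'd abs() difference, a sort on the difference, and head 1.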
I am trying to use an eval with like() to assign priority to certain IPs/hosts, and I am running into an issue where the priority is not being assigned. I am using network data to create my ES asset list, and I have a lookup that maps an IP to its CIDR range and then returns the zone the IP is associated with. Later in my search I rename zone to bunit, and right after that I am testing the eval as follows:

| eval priority=if(like(bunit,"%foo%"), "critical", "TBD")

As I am testing, at the end of my search I have:

| table ip, mac, nt_host, dns, owner, priority, lat, long, city, country, bunit, category, pci_domain, is_expected, should_timesync, should_update, requires_av, device, interface
| search bunit=*foo*

I get a list of all foo-related bunit events, but the priority field is set to "TBD". Would appreciate any help - thx
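For reference, like() uses SQL-style wildcards (% for any run of characters, _ for one character), so "%foo%" should indeed match any bunit containing "foo"; when it evaluates to false anyway, the usual culprit is the field not yet holding its final value at the point the eval runs. A small Python sketch of the LIKE semantics, to make the intended matching behavior concrete:

```python
import re

def sql_like(value, pattern):
    """Emulate SQL/SPL LIKE matching: % matches any run of characters,
    _ matches exactly one character; everything else is literal."""
    parts = re.split(r"([%_])", pattern)
    regex = "^" + "".join(
        ".*" if p == "%" else "." if p == "_" else re.escape(p)
        for p in parts
    ) + "$"
    return re.match(regex, value) is not None
```

If the equivalent test passes here but not in the search, checking where in the pipeline bunit actually gets its value (before vs. after the eval) is a good next step.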
Hi, I'm trying to use a match-pattern regex inside app-agent-config.xml in our Java microservice, but it does not work properly. E.g.:

<sensitive-url-filter delimiter="/" segment="3,4,5,6" match-filter="REGEX" match-pattern=":" param-pattern="myParam|myAnotherParam"/>

This should mask the selected segments that contain ":", but it masks everything. If I use match-pattern="=" it works as expected (masking segments that contain "=" in the string). Other examples that do not work (they mask everything):

match-pattern=":"
match-pattern="\x3A" (3A is ":" in the ASCII table)
match-pattern="[^a-z|-]+" (should return true if there is anything other than lowercase letters and "-")
match-pattern=":|="

Thank you
Best regards,
Alex Oliveira
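The intended behavior (split the URL on the delimiter, regex-test only the listed segments, mask the ones that match) can be modeled in a few lines. This Python sketch reflects my reading of the sensitive-url-filter semantics; the agent's actual segment numbering and mask string may differ, so treat it as an illustration of what "should" happen rather than the agent's implementation:

```python
import re

def mask_segments(url, segments, pattern, delimiter="/", mask="*****"):
    """Mask the given 1-based segments of `url` whose text matches `pattern`.
    Segments are counted across the raw split, including the empty
    leading segment produced by a leading delimiter (an assumption)."""
    parts = url.split(delimiter)
    rx = re.compile(pattern)
    for i in segments:
        if i - 1 < len(parts) and rx.search(parts[i - 1]):
            parts[i - 1] = mask
    return delimiter.join(parts)
```

Against this model, match-pattern=":" masks only segments containing a colon; if the real agent masks everything, that points at the colon being consumed or interpreted elsewhere (e.g. config parsing) rather than at the regex itself.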
I'm setting up my IdP service with SAML SSO (single sign-on). The documentation says Splunk Cloud provides JIT (just-in-time) provisioning, but I can't find the JIT provisioning section.

These are the pages I referred to:
https://docs.splunk.com/Documentation/SCS/current/Admin/IntegrateIdP#Just-in-time_provisioning_to_join_users_to_your_tenant_automatically
https://docs.splunk.com/Documentation/SCS/current/Admin/IntegrateAzure

I'm using the free trial now. Could this be the problem? Does JIT provisioning require another plan? Or am I just failing to find the JIT provisioning button? Please answer me. Thank you.
Hi, I would like to send a report via Splunk automatically on the last day of each month. In this case, I am afraid that I need to use a cron schedule. Does anyone have an idea? Thanks in advance! Tong
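Standard cron has no direct "last day of month" field; a common workaround is to schedule the job for every day in the 28-31 range (e.g. 55 23 28-31 * *) and have the report's own logic skip runs that are not actually the last day. The check itself is tiny; a Python sketch:

```python
from datetime import date, timedelta

def is_last_day_of_month(d):
    """True when `d` is the last day of its month,
    i.e. tomorrow falls in a different month."""
    return (d + timedelta(days=1)).month != d.month
```

In a Splunk saved search, the equivalent guard can be an eval/where condition comparing strftime(relative_time(now(), "+1d"), "%d") to "01".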
Considering 2022-06 as the starting month: if the month is 2022-07, I should assign 2022-06's corresponding "greater_6_mon" field value to 2022-07's field "prev", and likewise for 2022-08. Here are my values:

month     prev   greater_6_mon
2022-06              26
2022-07               2
2022-08               1

Expected result (please suggest):

month     prev   greater_6_mon
2022-06     0        26
2022-07    26         2
2022-08     2         1