All Topics

I inherited Splunk Enterprise installed on EC2 instances. We have CloudWatch sending log data to our Splunk deployment. How do I get that data fed into an index (not indexers)? So confused.
Does anyone feel like we are going to be able to create modern dashboards that let us interact with KV Store data the same way we could in SimpleXML? The old-school dashboards feel a bit clunky. The alternative is exposing the data feeds via REST and using a third-party tool such as retool.com to allow CRUD on KV Store items. The dedicated lookup-file-editing app is useful but not great for end-user consumption. I was wondering: what are people using? Apologies if I missed how to achieve this - most examples seemed to be old-school SimpleXML dashboards.
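If the REST route wins out, KV Store records are exposed over Splunk's management port under storage/collections/data. A minimal Python sketch of building the CRUD endpoint (the host, app, and collection names here are illustrative assumptions, not values from the post):

```python
from urllib.parse import quote

BASE = "https://splunk.example.com:8089"  # management port; illustrative host

def kvstore_url(app, collection, key=None, owner="nobody"):
    """Build the KV Store REST endpoint used for CRUD on a collection.
    GET on the collection URL lists records, POST creates one, and
    GET/POST/DELETE on a record's _key operates on that single record."""
    url = (f"{BASE}/servicesNS/{owner}/{quote(app)}"
           f"/storage/collections/data/{quote(collection)}")
    return f"{url}/{quote(key)}" if key else url

print(kvstore_url("my_app", "my_collection"))
```

A tool like retool.com would then issue authenticated JSON requests against these URLs.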
Hi all, for our environment we want to ingest VMware data for ITSI. The documentation tells us we need an OVA, so we installed one. However, this took a long time, and now I see we need a separate Scheduler, since the search head is actually on Windows - which, as I did not notice in time, apparently cannot be combined with a Scheduler. It is also a bit vague to me how the connection works: with the Scheduler you link a vCenter, so the Scheduler needs a connection to the vCenter as well? And this data is then used on the DCN to also connect to the vCenter? My main question is: is it possible to skip the Scheduler and edit a conf file on the DCN itself to start ingesting VMware and vCenter data right away? We have a limited time schedule, since this is just a test environment and the ITSI license doesn't last forever. Regards, Tim
I've been running into errors where larger searches are getting cancelled. I read this could be due to running out of memory. I looked at my search head, which is running on a server with 32 GB but only using 8 GB (numbers from the Monitoring Console). I'm assuming there's some setting to increase how much memory is allocated to Splunk, but I haven't found it. I've seen settings for memory per search - is the overall memory calculated from the allowed number of concurrent searches times the memory per search? Thanks
Why is the Splunk On-Call app so antiquated, and why does it not preserve usernames and passwords?
Hello friends, I have an interesting query that I would like help on. We are tracking three transactions, and I would like to create a graph that has the three transaction-time categories and their averages. I am able to graph the three series together, and I can compute each average individually, but I need help combining them. My search to show all of the series is:

| multisearch
    [ search index="a" addinventory InboundInventoryChangeElement
      | eval addTime = if(actionelementname=="AddInventory", strftime(strptime(length,"%H:%M:%S.%f"),"%S.%f"), length)
      | where addTime>0 ]
    [ search index="a" SWAPinventory InboundInventoryChangeElement
      | eval swapTime = if(actionelementname=="SwapInventory", strftime(strptime(length,"%H:%M:%S.%f"),"%S.%f"), length)
      | where swapTime>0 ]
    [ search index="a" removeinventory InboundInventoryChangeElement
      | eval removeTime = if(actionelementname=="RemoveInventory", strftime(strptime(length,"%H:%M:%S.%f"),"%S.%f"), length)
      | where removeTime>0 ]
| table _time, addTime, swapTime, removeTime

And here is my search for the averages:

index="a" addinventory InboundInventoryChangeElement
| eval addTime = strftime(strptime(length,"%H:%M:%S.%f"),"%S.%f")
| where addTime>0
| table _time, addTime
| join
    [ search index="a" addinventory InboundInventoryChangeElement
      | eval addTime = strftime(strptime(length,"%H:%M:%S.%f"),"%S.%f")
      | where addTime>0
      | stats avg(addTime) as AverageAddTime ]

The other two searches are exactly the same except the variables differ for add, swap, and remove. Any help would be greatly appreciated! Also, if there is an easier way than joins and multisearches, please let me know! Thank you!!!
We have two types of orders in the system: some are entered manually by phone and some are processed automatically as they are fed by other systems. The way I can differentiate them is by the order timestamps: phone orders do not contain milliseconds in the order timestamp (2022-09-16T17:07:41Z), while orders filled automatically by other systems do contain milliseconds (2022-09-16T16:22:28.573Z). I am calculating the processing delays on these orders, but I want to display the results on two rows: 1. Phone orders max delays. 2. System orders max delays. Here is what I am using now:

MySearch
| rex field=_raw "(?ms)^(?:[^ \\n]* ){9}\"(?P<TradeDateTS>[^\"]+)" offset_field=_extracted_fields_bounds
| rex field=_raw "^(?:[^ \\n]* ){7}\"(?P<StoreTS>[^\"]+)" offset_field=_extracted_fields_bounds0
| eval Delay = strptime(StoreTS, "%Y-%m-%dT%H:%M:%S.%N") - strptime(TradeDateTS, "%Y-%m-%dT%H:%M:%S.%N")
| stats max(Delay)

Note: the goal is not to add or remove the milliseconds information.
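One way to split the two populations is to branch on whether the timestamp has a fractional-seconds part, then take the max per branch (in SPL, something along the lines of an eval using match(TradeDateTS, "\.\d+") to derive a channel field, feeding stats max(Delay) by channel). The classification logic itself, sketched in Python:

```python
import re

def order_channel(ts):
    """Classify an ISO-8601 order timestamp: a fractional-seconds part
    implies a system-fed order, whole seconds imply a manual phone order."""
    return "system" if re.search(r"\.\d+Z?$", ts) else "phone"

print(order_channel("2022-09-16T17:07:41Z"))      # phone
print(order_channel("2022-09-16T16:22:28.573Z"))  # system
```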
Hello!!! I am doing calculations for the time it takes when a machine is undergoing maintenance. Right now I calculate the time in hours the maintenance takes, i.e. how long the machine was not in use, BUT I want the time to show only how much it was down on each specific day. If machine maintenance starts on one day (e.g. 9/7) but ends the next day (e.g. 9/8), I want one row showing the machine downtime for 9/7 AND a new row for the downtime during 9/8. Currently it gives me the total time in hours in a single row based on when work on the machine started. Can I have help please!!!! Code:

index=.......search.....
| eval open = strptime(Open_Date, "%m/%d/%Y %I:%M:%S %P")
| eval close = strptime(Close_Date, "%m/%d/%Y %I:%M:%S %P")
| eval Diff = round((close-open)/3600, 1)
| eval CloseTime = strftime(close, "%Y-%m-%d")
| eval OpenTime = strftime(open, "%Y-%m-%d")
| eval Time = strftime(_time, "%Y-%m-%d")
| table Time OpenTime CloseTime Close_Date Open_Date Diff
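The usual approach to "one row per calendar day" is to walk the [open, close] interval midnight by midnight, emitting a chunk per day. A Python sketch of that split (the timestamp format mirrors the Open_Date/Close_Date fields above; note Python spells the AM/PM directive %p):

```python
from datetime import datetime, timedelta

def downtime_per_day(open_ts, close_ts, fmt="%m/%d/%Y %I:%M:%S %p"):
    """Split an [open, close] maintenance window into per-day chunks,
    returning {'YYYY-MM-DD': hours_down_that_day}."""
    start = datetime.strptime(open_ts, fmt)
    end = datetime.strptime(close_ts, fmt)
    result = {}
    cur = start
    while cur < end:
        # next midnight after `cur` bounds this day's chunk
        midnight = (cur.replace(hour=0, minute=0, second=0, microsecond=0)
                    + timedelta(days=1))
        chunk_end = min(midnight, end)
        day = cur.strftime("%Y-%m-%d")
        result[day] = result.get(day, 0) + (chunk_end - cur).total_seconds() / 3600
        cur = chunk_end
    return result

print(downtime_per_day("09/07/2022 10:00:00 PM", "09/08/2022 02:00:00 AM"))
# {'2022-09-07': 2.0, '2022-09-08': 2.0}
```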
We are attempting to use Alert Manager as a way to maintain an audit trail. We open cases and note them before closing, but there are some that we'd want to auto-suppress. I'm sure I'm missing something simple here, hence the post. The page in Alert Manager says:

Usage
- Use "$result.fieldname$" (without quotes) to refer to a field from results. Can either be used in "field" or "value"
- Use "$fieldname$" (without quotes) to refer to a metadata field, such as "$title$". Can either be used in "field" or "value"
- Use "_time" (without quotes) in "field" to compare the incident timestamp against a value (e.g. for maintenance windows)
Note: All of the rules configure must be true for a valid suppression.

If I just put $title$ in the field space and specify a title, nothing gets suppressed. Do I need to use different language/arguments here? Thank you in advance.
How do I get the IP address of a specific host?
Hello folks, I want to enforce TLS certificate and hostname checks. Do you know if the sslCommonNameToCheck option accepts wildcard names? Thank you
I was asked to archive search results in a CSV and then periodically send those results by email. My solution is to do this in two reports: the first runs the search and appends the results to a lookup; the second just grabs the entire lookup (| inputlookup my_lookup.csv) and emails the results as a CSV. This seems to work fine, but I feel like there should be a more elegant solution - ideally a single search/report. I'm curious what others think.
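Outside of Splunk, the same append-then-export pattern is just "append rows, write the header once". A Python sketch of that archive step (path and field names are illustrative):

```python
import csv
import os

def append_results(path, rows, fieldnames):
    """Append result rows to a CSV archive, writing the header only when
    the file is first created - the moral equivalent of appending search
    results to a lookup."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        if write_header:
            writer.writeheader()
        writer.writerows(rows)
```

Within Splunk itself, the first report's append can be done with `| outputlookup append=true my_lookup.csv`; collapsing to a single report mostly hinges on whether the emailed CSV must contain the full archive rather than just the newest results.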
Hey team, I am trying to generate a search that returns a complete set of results from today and then compares it with a search where the results only came in between 4-5pm. I then want to work out the percentage of results that came in between 4-5pm. So far I have a draft, with the **** being where I think I need to constrain the time range of the search? Thanks!
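The percentage itself is just the window count over the total count; in SPL one common shape is an eval like in_window=if(date_hour==16, 1, 0), then stats sum(in_window) as window count as total, then eval pct=round(100*window/total, 2). The arithmetic in Python:

```python
def pct_in_window(total_events, window_events):
    """Percentage of the day's results that arrived in the 4-5pm window,
    guarding against a zero total."""
    return 100.0 * window_events / total_events if total_events else 0.0

print(pct_in_window(200, 50))  # 25.0
```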
I am running a query that fetches latency above 1000 milliseconds. The query uses stats and a where clause to yield approximately 60 results. When I try to timechart this data, replacing stats with streamstats, I now get 26K+ events. Why is my timechart not reflecting the 60 results I was fetching with the stats command?
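The jump in event count is expected: stats collapses all matching events into aggregate rows, while streamstats is a streaming command that passes every input event through, annotated with a running value. A toy Python contrast of the two behaviors:

```python
def stats_count(events):
    """Like `| stats count`: one aggregated row for the whole input."""
    return [len(events)]

def streamstats_count(events):
    """Like `| streamstats count`: every event passes through, each
    carrying the running count seen so far."""
    return [i + 1 for i in range(len(events))]

print(stats_count(["a", "b", "c"]))        # [3]
print(streamstats_count(["a", "b", "c"]))  # [1, 2, 3]
```

To chart the aggregated latencies over time, keeping stats but adding a time bucket (bin _time span=... before stats ... by _time) is usually closer to the intent than switching to streamstats.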
So a few days ago I typo'd the index name in the inputs.conf file on serverA, which runs the universal forwarder, and inadvertently sent the log data to IndexB. We discovered it a few hours ago, and since then I have verified multiple times that serverA's inputs.conf is correct now and restarted Splunk. It's been about an hour and this one inputs stanza is still somehow forwarding log data to IndexB instead of IndexA. I have grepped splunkd.log, but all I can see is it parsing the monitored file correctly - nothing about its destination index. Any help would be appreciated!
Hi, I have a weird problem: some data is gone after a few days, but not from a summary index based on it. I'll explain. I have an index whose data I query over a 24h time range to get some results. I created a summary index fed by almost the same query (a saved search) to show similar info on another dashboard (Historic). If you launch both queries (the original and the saved search for the summary index) you get the same number of events, but when I run a search against the summary index, on some dates the results do not match the original index. Say I get 50 events from the original index and 60 from the summary index. How can that be?? I've been told the transaction command can sometimes cause trouble. Is this true? I use this command in the original query and in the saved search that feeds the summary index, but not in the search that displays info from the summary index. Thanks. Regards,
I have an app with my alerts. I have risk enabled and it's working; however, risk isn't showing up in the Edit Correlation Search menu. Is there a setting in a .conf file I am missing? I looked into alert_actions.conf but don't see any other rule linking to it. Below is my risk setting for one of my rules:

action.risk = 1
action.risk.param._risk = [{"risk_object_field": "dest", "risk_object_type": "system", "risk_score": 14}]
action.risk.param._risk_message = Wsmprovhost.exe spawned a LOLBAS process on $dest$.
action.risk.param._risk_score = 0
action.risk.param.verbose = 0
Is it possible to access the script/resource files located within a Splunk app's bin or default directory from the Splunk REST API or Java SDK? I can get the path to the app; however, I cannot figure out a way to access the files within the app itself. Is this possible in Splunk?
Hi everyone, from dbxquery I retrieve this table:

id    start_time1            end_time1                   start_time2            end_time2
1234  13/09/2022 21:46:43.0  16/09/2022 12:10:35.414809  15/09/2022 21:46:32.0  16/09/2022 09:27:41.0
1234  13/09/2022 21:46:43.0  16/09/2022 12:10:35.414809  14/09/2022 24:52:03.0  15/09/2022 10:15:56.0
1234  13/09/2022 21:46:43.0  16/09/2022 12:10:35.414809  15/09/2022 10:30:14.0  15/09/2022 10:47:26.0

I want to find the start_time2 that is closest to start_time1, i.e. the second row. How can I do this, please? Thanks, Julia
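If the comparison can happen after dbxquery, one way is to parse both timestamps and keep the row minimizing the absolute difference (in SPL, an eval of abs(strptime(start_time2, ...) - strptime(start_time1, ...)) followed by sort on that delta and head 1). The selection logic in Python, with the sample values lightly normalized for illustration:

```python
from datetime import datetime

FMT = "%d/%m/%Y %H:%M:%S.%f"

def closest_start(start_time1, start_time2_values):
    """Return the start_time2 with the smallest absolute distance to
    start_time1 (dd/mm/yyyy timestamps, as in the dbxquery output)."""
    ref = datetime.strptime(start_time1, FMT)
    return min(start_time2_values,
               key=lambda t: abs(datetime.strptime(t, FMT) - ref))

rows = ["15/09/2022 21:46:32.0", "14/09/2022 23:52:03.0", "15/09/2022 10:30:14.0"]
print(closest_start("13/09/2022 21:46:43.0", rows))  # 14/09/2022 23:52:03.0
```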
I am trying to use an eval with like() to assign priority to certain IPs/hosts, and I'm running into an issue where the priority is not being assigned. I am using network data to create my ES asset list, and I have a lookup that maps each IP to a CIDR range and then returns the zone the IP is associated with. Later in my search I rename zone to bunit, and right after that I am testing the eval as follows:

| eval priority=if(like(bunit,"%foo%"), "critical", "TBD")

At the end of my search I have:

| table ip, mac, nt_host, dns, owner, priority, lat, long, city, country, bunit, category, pci_domain, is_expected, should_timesync, should_update, requires_av, device, interface
| search bunit=*foo*

I get a list of all foo-related bunit events, but the priority field is set to "TBD". Would appreciate any help - thx