All Topics

Is there a way to get logs in JSON format for an API call from a Spring Boot application?
Hello, our use case is to add a viz that is a URL for an interactive map (New Jersey County Map). This map should be displayed on the dashboard at all times; when a user clicks on any county, a data table should open in the lower left of the window/panel/viz. We tried inserting it as an image and changed the allowed domain for images in web.conf:

dashboards_csp_allowed_domains = *.njogis-newjersey.opendata.arcgis.com

But since it is not really an image, just an image rendered on a webpage, this didn't work and we received an error. With classic dashboards we used an iframe on occasion, but that was kludgy at best. We haven't worked with the REST API, but could that be a possible solution? Thanks in advance and God bless, Genesius
Hi, my overall goal is to create a resulting data table with headings including HourOfDay, BucketMinuteOfHour, DayOfWeek, and source, as well as an upperBound and lowerBound. My current query is as follows:

index="akamai" sourcetype=akamaisiem
| eval time = _time
| eval time=strptime(time, "%Y-%m-%dT%H:%M:%S")
| bin time span=15m
| eval HourOfDay=strftime(time, "%H")
| eval BucketMinuteOfHour=strftime(time, "%M")
| eval DayOfWeek=strftime(time, "%A")
| stats avg(count) as avg stdev(count) as stdev by HourOfDay,BucketMinuteOfHour,DayOfWeek,source
| eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2))
| fields lowerBound,upperBound,HourOfDay,BucketMinuteOfHour,DayOfWeek,source
| outputlookup state.csv

However, it produces zero results. Can you please help? I am using the following article as a guide, as this is for an anomaly detection project: https://www.splunk.com/en_us/blog/platform/cyclical-statistical-forecasts-and-anomalies-part-1.html I appreciate any help. Thanks!
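Two likely causes of the zero results: _time is already a numeric epoch value, so running strptime() on it returns null, and avg(count) is computed before any count field exists (an event count per time bucket is needed first). A minimal sketch of the counts-per-bucket pattern from the cited blog post, keeping the field names above:

index="akamai" sourcetype=akamaisiem
| bin _time span=15m
| stats count by _time, source
| eval HourOfDay=strftime(_time, "%H")
| eval BucketMinuteOfHour=strftime(_time, "%M")
| eval DayOfWeek=strftime(_time, "%A")
| stats avg(count) as avg stdev(count) as stdev by HourOfDay, BucketMinuteOfHour, DayOfWeek, source
| eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2))
| fields lowerBound, upperBound, HourOfDay, BucketMinuteOfHour, DayOfWeek, source
| outputlookup state.csv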
Just starting out with provisioning Splunk 9.x via an AWS AMI and Terraform. Does anyone have any idea if it is possible to change the admin password using a user_data script on the AMI? I found one mention of using export password="<password>", but that didn't seem to work; it still used the default SPLUNK-$instance id$. We would like to have the password changed during provisioning, if possible. Thanks.
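One documented approach, sketched here under the assumption that splunkd has never started on the AMI (user-seed.conf is only read on the very first startup) and that SPLUNK_HOME is /opt/splunk:

#!/bin/bash
# user_data sketch: seed admin credentials before Splunk's first start
cat > /opt/splunk/etc/system/local/user-seed.conf <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = <your-password-here>
EOF
/opt/splunk/bin/splunk start --accept-license --answer-yes --no-prompt

If Splunk has already started once, the seed file is ignored and something like splunk edit user admin -password <new> -auth admin:<current> would be needed instead.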
Hi, I am running the following query to check seasonality in my index:

index="ABC"
| timechart count by _time
| timechart

However, I am receiving the following error and I do not understand it at all:

Error in 'timechart' command: Repeated group-by field '_time'. The search job has failed due to an error. You may be able to view the job in the Job Inspector.

Can you please help? Many thanks!
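For reference, timechart already groups by _time, so naming _time as the split-by field (or piping into a second, empty timechart) is what triggers the error. A minimal sketch of a count over time, with the span an assumption:

index="ABC"
| timechart span=1h count

The by clause of timechart is for splitting by a field other than _time, e.g. | timechart span=1h count by source.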
Hi, I am writing a query here to calculate the expected frequency of data in an index:

index=ABC
| eval time_diff=_time-lag(_time)
| stats avg(time_diff) as avg_time_diff

However, when I try to run it, I receive the following error message:

Error in 'eval' command: The 'lag' function is unsupported or undefined. The search job has failed due to an error. You may be able to view the job in the Job Inspector.

Can you please help?
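eval has no lag() function in SPL; the usual equivalents are the delta command or streamstats. A minimal sketch, assuming events should be ordered oldest-first before differencing:

index=ABC
| sort 0 _time
| delta _time as time_diff
| stats avg(time_diff) as avg_time_diff

An equivalent alternative is | streamstats current=f last(_time) as prev_time followed by | eval time_diff=_time-prev_time.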
Hi, I'm trying to create a correlation search in Splunk but am unable to figure out the time range options: earliest time, latest time, and cron schedule. Could anyone explain these from scratch? For example, if I want to schedule a search with an earliest time of 1h30min ago, what do I have to enter? Thanks.
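For reference, a sketch of how those fields are typically filled in (the cron value is an example, not a requirement): earliest and latest take relative time modifiers, and the cron schedule controls how often the search runs.

earliest time:  -90m@m        (1 hour 30 minutes ago, snapped to the minute)
latest time:    now
cron schedule:  */15 * * * *  (run every 15 minutes)

-90m expresses "1h30min ago" as a single unit, which is the simplest way to write it.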
Hi all, good day. I have Juniper data in Splunk under sourcetype=juniper*, and I need some searches to create dashboards that are useful for the Juniper team to check whether any device is down or there is an outage. Could you please suggest any searches for this Juniper dashboard?
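A starting-point sketch: a heartbeat-style search that flags devices that have stopped sending data (the 15-minute threshold and the index wildcard are assumptions about your environment):

| tstats latest(_time) as last_seen where index=* sourcetype=juniper* by host
| eval minutes_silent=round((now()-last_seen)/60)
| eval status=if(minutes_silent>15, "possible outage", "up")
| table host last_seen minutes_silent status

For link up/down events specifically, Junos syslog typically includes SNMP_TRAP_LINK_DOWN / SNMP_TRAP_LINK_UP messages that you could count by host and interface, though exact message formats depend on your devices.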
I have a user table which shows which department each user belongs to. I want to join this with another table on user so I can get the respective department for each user. However, I would like to have the headcount of each department showing as well. The code below doesn't work, but if it makes sense, I would like to achieve something like this:

index=...
| join type=left user
    [| inputlookup lookup
     | rename cn as user
     | stats count(user) as headcount by department]
| table logon_time user department headcount
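The likely problem in the subsearch: stats count(user) by department collapses the rows to one per department and drops the user field, so there is nothing left to join on. eventstats adds the per-department count while keeping every row. A sketch, assuming the lookup and cn column as in the post:

index=...
| join type=left user
    [| inputlookup lookup
     | rename cn as user
     | eventstats count as headcount by department
     | fields user department headcount]
| table logon_time user department headcount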
I have a lookup in which column A is the index and column B is the number of hosts. I would like to be able to compare the number of hosts per index in the lookup against what Splunk actually returns, i.e. if I have three hosts for an index in my lookup but Splunk returns two, I would like to see that number. Probably a difficult query, but one I am struggling with - thanks in advance!
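A sketch of one way to do the comparison (the lookup name and column names here are assumptions; adjust to match your lookup):

| tstats dc(host) as actual_hosts where index=* by index
| lookup host_counts.csv index OUTPUT expected_hosts
| eval missing=expected_hosts-actual_hosts
| where missing > 0
| table index expected_hosts actual_hosts missing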
Currently my heavy forwarder is receiving unwanted logs from a lot of different devices, and they are taking up a lot of space. Is there a way to reject logs from all servers by default, and manually whitelist the servers we want to monitor, so that the rest don't take up space?

So basically: reject all logs from all servers; accept logs only from server1 and server2.

Thank you in advance.
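A sketch of the documented "discard everything, then keep specific events" pattern using props.conf and transforms.conf on the heavy forwarder (the host names are from the post; the stanza names are arbitrary):

# props.conf
[default]
TRANSFORMS-filter = drop_all, keep_wanted_hosts

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_wanted_hosts]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(server1|server2)$
DEST_KEY = queue
FORMAT = indexQueue

Transforms run in order, so everything is routed to the nullQueue first and events from the matching hosts are routed back to the indexQueue. Applying TRANSFORMS under [default] affects all sourcetypes, so test carefully; scoping it to specific sourcetype or source stanzas is safer.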
index=na160 starttime="02/02/2023:00:00:00" endtime="02/02/2023:24:00:00" requestId="TID:131610985000004c2d"
| stats count as 240_COUNT by logRecordType
| join logRecordType type=outer
    [search index=na160 starttime="02/08/2023:00:00:00" endtime="02/08/2023:24:00:00" requestId="TID:348627200000212ea7"
     | stats count as 242_COUNT by logRecordType]
| eval difference = (242_COUNT - 240_COUNT)
| table logRecordType, 240_COUNT, 242_COUNT, difference

The eval above fails after joining the two datasets:

Error in 'eval' command: The expression is malformed.

I would appreciate your help mitigating this issue.
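The malformed-expression error is most likely the field names: names that start with a digit (240_COUNT, 242_COUNT) must be wrapped in single quotes inside an eval expression, otherwise the parser reads them as numbers. A sketch of the fixed line:

| eval difference = ('242_COUNT' - '240_COUNT')

The rest of the search can stay as-is; stats, join, and table accept digit-leading field names without quoting.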
Is there a delay in the Splunk API server 'seeing' events that are already indexed? I use the Splunk API to query logs for some test cases. I can submit a job to the API server (`POST https://<SERVER>:8089/services/search/jobs`). That works fine. But intermittently the search job returns no results (`GET https://<SERVER>:8089/services/search/jobs/<JOB_ID>/results` returns a 204/No Content HTTP header and no HTTP body).

I checked whether there was an indexing delay using the command below. Apparently there was not - the relevant logs were ingested and indexed well in time. It's just the Splunk API server that intermittently returns no results.

<SPLUNK QUERY> | eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")

Any pointers on how I can dig into this further? I'm just a dev, not a Splunk admin, so guidance on what to do next is much appreciated.
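One thing worth checking (a sketch, not a definitive diagnosis): the results endpoint returns 204 when the job has not finished yet, so polling the job status until it reports done before fetching results often explains intermittent empty responses:

# Submit the job and capture the sid from the response
curl -sk -u user:pass https://<SERVER>:8089/services/search/jobs -d search="search <SPLUNK QUERY>"

# Poll the job until dispatchState is DONE / isDone is 1
curl -sk -u user:pass https://<SERVER>:8089/services/search/jobs/<JOB_ID>

# Only then fetch results
curl -sk -u user:pass "https://<SERVER>:8089/services/search/jobs/<JOB_ID>/results?output_mode=json"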
Hi all - this one is hurting my brain. I need to pull two distinct numbers from my events: one with a total count of assets, and one with a total count of assets that contain a vulnerability. What I think it should look like is not working:

| (stats dc(AssetNames) AS TotalExternalAssets, (dc(Asset_Names) AS TotalExposedAssets | where vulnerability!="missing"))

How do I get these two counts out of my events?
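A sketch of one way to get both counts in a single stats call, using an eval inside dc() to restrict the second count (this assumes the field is consistently named AssetNames; the post uses both AssetNames and Asset_Names):

| stats dc(AssetNames) as TotalExternalAssets,
        dc(eval(if(vulnerability!="missing", AssetNames, null()))) as TotalExposedAssets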
This is not a question; rather, I am sharing something that I discovered with a Splunk OnDemand support call. I thought I was a bit of a Splunk pro, but it just goes to show there's always something to learn, and this one was so simple it's a little embarrassing.

Imagine you have a very large lookup with 5 million+ rows (in my case a KV store containing an extract from the Maxmind DB with some other internal references added). If you want to do a CIDR match on this, you set up a lookup definition with Match Type "CIDR(network)". Now run your query (IP address obfuscated in the example):

| makeresults
| eval ip4 = "127.0.0.1"
| lookup maxmind network AS ip4

For me on Splunk Cloud, this takes around 50 seconds, and I contacted Splunk Support for some assistance in making this viable (50 seconds is just way too high).

I already understood that the issue is that the cidrmatch has to run on every single row - I added a pre-filter to my lookup definition and proved that with the pre-filter it ran much, much faster, but obviously that limited its use to just the filtered rows.

I tried messing around with inputlookup, but couldn't get it any better:

| inputlookup maxmind
| where country_name="Japan" city_name="Osaka"

This still took the same ~50 seconds to run.

Of course, if I had read the documentation properly, I would have seen that "where" is actually an argument of the inputlookup clause. Changing this to:

| inputlookup maxmind where country_name="Japan" city_name="Osaka"

made all the difference - this now runs in 5-7 seconds, as the WHERE clause now behaves the same as if you had added the pre-filter to the lookup definition.

To use this in a search to enrich other data, you can use:

| makeresults
| eval ip4 = "127.0.0.1"
| appendcols [| inputlookup maxmind where country_name="Japan" city_name="Osaka"]
| where cidrmatch(network, ip4)

Obviously, the tighter you can get your WHERE clause, the faster this runs. You can also use accelerated fields in your lookup (if using a KV store) to enhance this further; it will depend entirely on your data and how you can filter it down to the smallest possible data set before continuing.

For me, using the same 5 million row KV store and Maxmind data as my example:

| makeresults
| eval ip4 = "127.0.0.1"
| appendcols [| inputlookup maxmind where country_name="Japan" city_name="Osaka" postal_code="541-0051" network_cidr="127.0.*"]
| where cidrmatch(network, ip4)

runs in ~0.179 seconds [using the actual IP address, not the fake one above]. Your mileage may vary, but I hope this helps someone else trying to figure this out. I haven't tried the same with a CSV lookup, but I imagine it would be very similar.
We had Splunk working on one domain, joined a different domain, and then this happened to our search head cluster:
When starting Splunk, the web interface doesn't open and goes to a blank page, but I can go to 127.0.0.1:8000. Attached is a snippet of the issue.
Hi, I am new. I have two Excel documents, one containing firewall logs and the other containing syslogs. How would I combine the data in Splunk so I can view them on one page? I want to compare when the firewall was used (and its destination IP) to when FTP was used (from the syslogs). Thank you.
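A sketch of one common approach: ingest each spreadsheet (e.g. exported to CSV) under its own sourcetype, then search both sourcetypes together and line the events up on time. The sourcetype and field names here are assumptions about how you index the files:

sourcetype="firewall_csv" OR (sourcetype="syslog_csv" ftp)
| table _time sourcetype dest_ip action
| sort 0 _time

Alternatively, both files could be uploaded as lookup files and combined with inputlookup plus append, if you prefer not to index them.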
I have tried multiple methods to get a complex URL to work as a redirect. Unfortunately, there are a lot of different base URLs in my actual data, so using multiple conditions is probably not a good option. It seems as though it wants no "/" in the URL field, only parameters.

<dashboard>
  <label>URL Test</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults count=3 | streamstats count | eval url = case(count=1,"https://bing.com",count=2,"https://google.com",count=3,"https://nvd.nist.gov/vuln/search/results?form_type=Advanced&amp;results_type=overview&amp;search_type=all&amp;cpe_vendor=cpe%3A%2F%3Aapache&amp;cpe_product=cpe%3A%2F%3Aapache%3Alog4net&amp;cpe_version=cpe%3A%2F%3Aapache%3Alog4net%3A1.2.9.0") | eval site = case(count=1,"bing",count=2,"google",count=3,"nist") | table site, url</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
        <fields>["site"]</fields>
        <drilldown>
          <eval token="u">replace($row.url$, "https://", "")</eval>
          <link target="_blank">
            <![CDATA[ https://$u$ ]]>
          </link>
        </drilldown>
      </table>
    </panel>
  </row>
</dashboard>
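A possibly simpler variant worth trying (a sketch; the |n token filter disables escaping so the full URL, slashes included, passes through to the link target unchanged):

<drilldown>
  <link target="_blank">$row.url|n$</link>
</drilldown>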
Hello, how do I effectively whitelist events like excessive failed logins and abnormal new processes? These are known, non-malicious issues in our network that generate a lot of hits that do not amount to anything upon extensive investigation. Thanks in advance.
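One common pattern (a sketch; the lookup name, fields, and threshold are hypothetical) is to keep an allow-list lookup of known-benign sources and exclude them in the search before alerting:

index=security sourcetype=auth action=failure
| search NOT [| inputlookup known_benign_sources.csv | fields src user]
| stats count by src, user
| where count > 10

In Splunk Enterprise Security specifically, notable-event suppressions or editing the correlation search's filter are the usual mechanisms for this.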