All Posts

@PickleRick    Thank you so much for answering my questions over such a long period of time. Thanks to you, I now understand what was confusing me about the data model. Reading the docs again, I realized I had been thinking in a different direction. Thank you.
<earliest>timepicker.earliest</earliest> <latest>timepicker.latest</latest> This shows you are not using the tokens correctly - token references need the $...$ delimiters, i.e. <earliest>$timepicker.earliest$</earliest> and <latest>$timepicker.latest$</latest>
Yes. Bucketing works like it does for normal events within an index. A whole bucket is rolled when the _most recent_ event in the bucket is older than the retention period for the index - that's why you can have data older than the retention period in your index. It works the same way here - if your bucket "overlaps" the boundary of the summary range, the whole bucket will remain available.
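To illustrate the rolling rule described above, here is a rough sketch (the function name and dates are hypothetical illustrations, not Splunk internals):

```python
from datetime import datetime, timedelta

def bucket_rolls(newest_event: datetime, now: datetime, retention: timedelta) -> bool:
    """A bucket is rolled only when its *newest* event is past retention,
    so older events in the same bucket can outlive the retention period."""
    return now - newest_event > retention

now = datetime(2024, 6, 1)
retention = timedelta(days=90)

# Bucket spans events from 200 days ago to 80 days ago: the newest event
# (80 days) is still inside retention, so the whole bucket stays - including
# the events that are already older than 90 days.
print(bucket_rolls(now - timedelta(days=80), now, retention))   # False (kept)
print(bucket_rolls(now - timedelta(days=100), now, retention))  # True (rolled)
```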
"I installed splunkforwarder-8.2.9 on Oracle Linux 7.4 and added the Linux add-on to it through the Deployment Server. Although the logs from this server are being received by the HF (we verified this using tcpdump), when we search the desired index, we don't see any logs in it. (Our Splunk version is 9.2.1 and UF version is 8.2.9, and due to certain reasons, we cannot use the latest version of UF.)"
I believe I have what is a very simple question, but with all my searching I have been unable to find an answer. I've made a simple dashboard to show successful and failed logins to our application.  I have created a dropdown/radio button panel with some static options shown below.  I can show all results with an asterisk and only successful logins with 0, but using "!=0" to get everything that doesn't equal 0 doesn't produce any results. I have tried some basic combinations of !=0, !="0", !=="0" here in the Static Options window. What am I missing?  The tutorials I've found don't specifically cover this type of syntax.  Thank you in advance!  
You probably need to ask your SaaS provider what their observability provision options are, because they would probably need to install something on their systems, or give you access to their filesystems (which seems unlikely for a SaaS provision)!
What do you mean by not using the timepicker tokens correctly? Try using $timepicker.earliest$ and $timepicker.latest$ - I am using the same. I am not sure what the issue is here:

<form version="1.1" theme="dark">
  <label>DMT Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) repoter.dataloadingintiated
| stats count by local
| append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) task.dataloadedfromfiles NOT "error" NOT "end_point" NOT "failed_data" | stats count as FilesofDMA]
| append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) "app.mefwebdata - jobintiated" | eval host = case(match(host_ip, "12.234"), "HOP"+substr(host, 120,24), match(host_ip, "10.123"), "HOM"+substr(host, 120,24)) | eval host = host + " - " + host_ip | stats count by host | fields - count | appendpipe [stats count | eval Error="Job didn't run today" | where count==0 | table Error]]
| stats values(host) as "Host Data Details", values(Error) as Error, values(local) as "Files created localley on AMP", values(FilesofDMA) as "File sent to DMA"</query>
          <earliest>timepicker.earliest</earliest>
          <latest>timepicker.latest</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentageRow">false</option>
        <option name="rowNumbers">true</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="host_ip">
          <colorPalette type="map">{"12.234.201.22":#53A051,"10.457.891.34":#53A051,"10.234.34.18":#53A051,"10.123.363.23":#53A051}</colorPalette>
        </format>
        <format type="color" field="local">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="FilesofDMA">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="Files created localley on AMP">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="File sent to DMA">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="Error">
          <colorPalette type="map">{"Job didn't run today":#DC4E41}</colorPalette>
        </format>
        <format type="color" field="Host Data Details">
          <colorPalette type="map">{"HOM-jjderf - 10.123.34.18":#53A051,"HOM-iytgh - 10.123.363.23":#53A051,"HOP-wghjy - 12.234.201.22":#53A051,"HOP-tyhgt - 12.234.891.34":#53A051}</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</form>
Wow... what a broad question.   What do you mean by integrate? Which direction? Generally, you can call REST endpoints and consume whatever comes out; you can also send data there. If you get data pushed, you would have to set up a point (machine) where you can receive the data, process it and forward it to Splunk, or use an HEC (HTTP Event Collector) endpoint of a Splunk instance. If the SaaS produces machine-readable files, you would be able to consume those as well. So you see, there are various ways.
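As a concrete illustration of the HEC route mentioned above, here is a minimal sketch of pushing one event to Splunk's HEC event endpoint; the URL and token are placeholders you would replace with your own instance's values:

```python
import json
from urllib import request

# Placeholder endpoint and token - substitute your own environment's values.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def hec_body(event: dict, sourcetype: str = "saas:app") -> bytes:
    """Build the JSON body the HEC event endpoint expects."""
    return json.dumps({"event": event, "sourcetype": sourcetype}).encode()

def send_event(event: dict):
    """POST one event to HEC (only works against a live Splunk instance)."""
    req = request.Request(
        HEC_URL,
        data=hec_body(event),
        headers={"Authorization": "Splunk " + HEC_TOKEN,
                 "Content-Type": "application/json"},
    )
    return request.urlopen(req)
```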
You need to use two commands | inputlookup remediation.csv | stats count by knowbe4, solution | rex field=knowbe4 mode=sed "s/<\/?\w+.*?\/?>//g" | rex field=solution mode=sed "s/<\/?\w+.*?\/?>//g"
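For reference, the sed-style expression in those rex commands is just a global regex substitution that strips markup tags; outside of Splunk, the same pattern in plain Python looks like this (function name is illustrative):

```python
import re

# Same pattern as the SPL rex mode=sed expression: matches HTML/XML-style tags.
TAG_RE = re.compile(r"</?\w+.*?/?>")

def strip_tags(text: str) -> str:
    """Remove markup tags, keeping the inner text (the s/.../.../g equivalent)."""
    return TAG_RE.sub("", text)

print(strip_tags("<p>Use <b>strong</b> passwords</p>"))  # Use strong passwords
```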
@ITWhisperer Thank you. I have an issue: I need to use the same regex on two different fields, but it throws an error when I run the below query  | inputlookup remediation.csv | stats count by knowbe4, solution | rex field=knowbe4 mode=sed "s/<\/?\w+.*?\/?>//g" rex field=solution mode=sed "s/<\/?\w+.*?\/?>//g"  
What are the various methods to integrate third-party SaaS applications with Splunk?
Hi @m92, with the table command you have many events with the same srcip and different _time. Do you want different lines if you have different _time? If yes, you can add _time to the BY clause: (index="index1" Users =* IP=*) OR (index="index2" tag=1 ) | regex Users!="^AAA-[0-9]{5}\$" | eval IP=if(match(IP, "^::ffff:"), replace(IP, "^::ffff:(\d+\.\d+\.\d+\.\d+)$", "\1"), IP) | eval ip=coalesce(IP,srcip) | stats dc(index) AS index_count values(Users) AS Users BY ip _time | where index_count>1 | table Users, ip, _time Even so, in this way you could have different _time values in the two indexes, so it will be difficult to group by _time. Ciao. Giuseppe
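The eval/match/replace step in that search strips the IPv4-mapped-IPv6 prefix before coalescing; the same normalization as a standalone sketch (function name is illustrative):

```python
import re

def normalize_ip(ip: str) -> str:
    """Strip the ::ffff: prefix from IPv4-mapped addresses
    (::ffff:a.b.c.d -> a.b.c.d), mirroring the eval/replace in the search."""
    m = re.fullmatch(r"::ffff:(\d+\.\d+\.\d+\.\d+)", ip)
    return m.group(1) if m else ip

print(normalize_ip("::ffff:192.168.1.10"))  # 192.168.1.10
print(normalize_ip("192.168.1.10"))         # 192.168.1.10 (unchanged)
```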
@PickleRick    Thank you very much for your reply. If I understand you correctly, as shown in the picture below, the summary range refers to the period for which summary data is preserved, and the backfill range refers to the period over which summarization runs when backfilling. Also, can I understand that the reason there is data physically outside the actual summary range is that the summary data is stored in buckets and rolled in bucket units?  
Hi PickleRick Thanks for your answer. 1. Is there such a thing as a support service? I just posted this hoping to get some kind of help, wherever it comes from.  If there is online support, I'd appreciate it if someone could tell me how to contact them. 2. Yes, the environment is old. Strangely enough, the only problem I'm having is with the 9.2.1 forwarder. The old 6.x is OK. And the indexers (7.3.3) are a bit old, but they do work. Yes, I have also verified the configs are exactly the same. For that reason, I was wondering whether 9.2.1 required something new or different in the config. Thanks  
Hi @AtherAD , We're deploying UBA and we had an installation on Red Hat 8.8 with some packages at 8.9, and it didn't run! Splunk Support confirmed that (with the present UBA release) all the packages must be the ones in the certified release, e.g. Red Hat 8.8, without any update to later releases. And you must block all updates, otherwise the installation will stop running. Ciao. Giuseppe
Depends on what do you want to "integrate". Do you want to collect events generated by your web app/web server? Do you want to collect metrics about your server? Do you want to embed reports from Splunk on your website? Do you want to be able to perform some action on your Splunk environment from your web app? Something else?
1. This is not Splunk Support service. This is a volunteer-driven community. 2. Your environment is really old. 3. We don't know what forwarder you're using (UF/HF), what configuration you have for your inputs and sourcetypes/sources/hosts on your components. So the only answer you can get at the moment is "something's wrong". But seriously - unless it's a forwarder which is built into some (presumably also obsolete) solution, you should definitely update this 6.4 UF to something less ancient. As a side note - how have you verified that configs on those forwarders are "the same"?
| rex field=_raw mode=sed "s/<\/?\w+.*?\/?>//g"
I have the same issue and no solution.
OK. Right from the start there are some things that can be improved: | table host | dedup host While in this particular case it might not make much difference, it's worth remembering that the table command moves processing to the search-head layer, so it's best avoided until you really need to transform your data into a table for presentation. I'd do | stats values(host) as host | mvexpand host instead. The annotate=t part is also not really needed as you only want to set one field. I'm not sure what you're trying to do with this line: | eval multiplehost=mvjoin(host, ", ") I suppose you want it to work differently than it does. You can't "reach" into other result rows with the eval command. So you either need to combine your results into a multivalue field, or maybe transpose your results and use foreach. But this one will not work. Also, unless you have one of each host (not just one "dummy") in the appended part, you won't detect the failed ones.
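To illustrate why the eval/mvjoin attempt cannot combine values across rows, here is a rough Python analogy (not SPL) of per-row eval versus an aggregation like stats values():

```python
# eval runs per result row, so it only sees that row's own fields;
# an aggregation (like stats values()) is what combines values across rows.
rows = [{"host": "web01"}, {"host": "web02"}, {"host": "web03"}]

# eval-style: each row can only join its own (single-valued) field
per_row = [", ".join([r["host"]]) for r in rows]

# stats values() + mvjoin-style: aggregate across rows first, then join
combined = ", ".join(sorted({r["host"] for r in rows}))

print(per_row)   # ['web01', 'web02', 'web03']
print(combined)  # web01, web02, web03
```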