All Posts

Hi @HugheJass, try adding the field to use in the condition, e.g.:
All -> my_field=*
Successful -> my_field=0
Failed -> my_field!=0
Ciao.
Giuseppe
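For example, a minimal sketch of the input plus a panel search (the token name status, the field my_field, and the index are assumptions for illustration, not from your dashboard):
<input type="radio" token="status">
  <label>Login result</label>
  <choice value="my_field=*">All</choice>
  <choice value="my_field=0">Successful</choice>
  <choice value="my_field!=0">Failed</choice>
  <default>my_field=*</default>
</input>
...
<query>index=my_app_index $status$ | stats count by user</query>
The trick is that each choice value carries the whole field condition, so the bare !=0 never has to stand on its own in the search.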
There are two main commands for lookups:
| inputlookup my_lookup
| lookup my_lookup (this one is mainly used for enrichment)
So start with the | inputlookup my_lookup command. If you can't see the lookup, it's most likely due to permissions, or the definition has not been set. The lookup is a knowledge object and requires permissions, so it could be private or shared, or you may have to go to the app it's running under. Check this under Splunk GUI > Settings > Lookups: look under "Lookup table files" for the file, then under "Lookup definitions". Once you have the definition or CSV name, try that in the | inputlookup command. This assumes you have created the lookup file and it has permissions.
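For example, a minimal sketch (my_lookup, user_id, department, and the index/sourcetype names here are placeholders, not from your setup):
| inputlookup my_lookup
dumps the raw rows of the lookup table, while
index=my_index sourcetype=my_logs | lookup my_lookup user_id OUTPUT department
enriches each event that has a user_id field with the matching department value from the lookup.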
By the way, the forwarder is a Universal one. I created the config using the splunk commands on the command line, based on Forwarder 1. I did verify it by comparing the inputs.conf and outputs.conf files; they are exactly the same, I just changed the host name. The data from Forwarder 2 does get sent to the indexers correctly, but it just goes into the wrong index. That's the only problem I have. Both indexers have a props.conf with a stanza named after the sourcetype and a TRANSFORMS-routetoindex which points to a stanza in a transforms.conf. The sourcetype is exactly the same on both forwarders. Not sure if this will give you a clue to the cause of the problem. Thanks
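For reference, a typical index-routing pair on the indexers looks something like this (a sketch only; the sourcetype, transform, and index names are assumptions, not the actual config from this thread):
# props.conf
[my_sourcetype]
TRANSFORMS-routetoindex = route_to_my_index
# transforms.conf
[route_to_my_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = my_index
If both forwarders send the same sourcetype, a transform like this routes both to the same index, so a difference in the effective index usually comes from the index= setting in inputs.conf or from another transform matching first.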
Got it, it was a typo. We used the tokens correctly ($timepicker.earliest$ and $timepicker.latest$), but the data in the dashboard panel does not match what I see when I open the same search directly. May I know what the issue is here?
There is no such thing as "a .csv file saved in SPLUNK, which I believe is indexed". A CSV can be used as a lookup, or its contents might have been ingested and indexed, but then you need to know how and where it was indexed so that you can look for its data.
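Concretely, the two cases are searched differently (a sketch; the file and index names are placeholders):
| inputlookup my_file.csv
if it was uploaded as a lookup, or
index=my_index source="*my_file.csv"
if its contents were ingested into an index.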
1. If possible, avoid using screenshots. Paste your code into a preformatted paragraph or a code block - it's much easier to read and respond to that way. 2. Unless I'm blind, you don't show how you're using this token.
Hello, I have a really basic question. I have a .csv file saved in SPLUNK, which I believe is indexed - this is not the output of a search but a file fed into SPLUNK from another source. I want to be able to open the file in SPLUNK search. Can you please advise what command I should use in SPLUNK search to be able to see the content of the .csv? Thank you.
There can be multiple reasons. You're mentioning a HF - do you mean that your events go UF -> HF -> indexer(s)? Are you getting _any_ events from this UF? (Especially the UF's own logs in the _internal index.) Are you getting any other events through that HF?
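A quick way to check (substitute your forwarder's host name for my_uf_host; the name here is a placeholder):
index=_internal host=my_uf_host | stats count by sourcetype, source
If the UF's own splunkd logs show up there, the forwarding path works and the problem is more likely in the input or index configuration.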
I've been trying to get a new Developer License for more than a week and keep getting the same error message. I've also sent an email to devinfo@splunk.com but have not heard back yet. Is there any other way of getting a Developer License? Error: "Developer License Request Error. An error occurred while requesting a developer license. Please try again. If this error continues to occur, contact devinfo@splunk.com for assistance."
@apietsch I want to onboard data from a SaaS application to Splunk. What is the process? I think the first step would be to integrate the SaaS application's add-on with Splunk. That's the integration I'm talking about.
1. Well, if you have a valid contract with Splunk, you're entitled to support. The support portal is here -> https://splunk.my.site.com/customer/s/ (but as far as I remember, you need to have an account associated with a valid, active support contract, so not just anyone can request support on behalf of your organization; I might be wrong here, you need to verify that). 2. Since the 7.x line has been unsupported for some years now, it's hard to find a compatibility matrix for such an old indexer and a new forwarder. It generally should work, but it's definitely not a supported configuration (at the moment the only supported indexer versions are 9.x). But as long as both ends can negotiate a supported S2S protocol version, they should be relatively fine. _How_ did you verify the configs? btool?
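For reference, btool shows the effective merged configuration and which file each setting comes from, e.g. on the forwarder:
$SPLUNK_HOME/bin/splunk btool inputs list --debug
$SPLUNK_HOME/bin/splunk btool outputs list --debug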
@PickleRick Thank you so much for answering my questions over such a long period of time. Thanks to you, I now understand what was confusing me about the data model. Reading the docs again, I realized I had been thinking in a different direction. Thank you.
<earliest>timepicker.earliest</earliest>
<latest>timepicker.latest</latest>
This shows you are not using the tokens correctly.
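Token references in Simple XML need the surrounding dollar signs, so the corrected lines would be:
<earliest>$timepicker.earliest$</earliest>
<latest>$timepicker.latest$</latest>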
Yes. Bucketing data works like it does with normal events within the index. The whole bucket is rolled when the _most recent_ event in the bucket is older than the retention period for the index - that's why you can have data older than retention period in your index. Here it works the same way - if your bucket "overlaps" the boundary of the summary range, the whole bucket will be available.
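For context, the retention period referred to here is typically the frozenTimePeriodInSecs setting in indexes.conf (the index name below is an assumption):
[my_index]
# 90 days; the whole bucket rolls only once its newest event exceeds this age
frozenTimePeriodInSecs = 7776000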
"I installed splunkforwarder-8.2.9 on Oracle Linux 7.4 and added the Linux add-on to it through the Deployment Server. Although the logs from this server are being received by the HF (we verified thi... See more...
"I installed splunkforwarder-8.2.9 on Oracle Linux 7.4 and added the Linux add-on to it through the Deployment Server. Although the logs from this server are being received by the HF (we verified this using tcpdump), when we search the desired index, we don't see any logs in it. (Our Splunk version is 9.2.1 and UF version is 8.2.9, and due to certain reasons, we cannot use the latest version of UF.)"
I believe I have what is a very simple question, but with all my searching I have been unable to find an answer. I've made a simple dashboard to show successful and failed logins to our application. I have created a dropdown/radio button panel with some static options, shown below. I can show all results with an asterisk and only successful logins with 0, but using "!=0" to get everything that doesn't equal 0 doesn't produce any results. I have tried some basic combinations of !=0, !="0", and !=="0" in the Static Options window. What am I missing? The tutorials I've found don't specifically cover this type of syntax. Thank you in advance!
You probably need to ask your SaaS provider what their observability provision options are, because they would probably need to install something on their systems, or give you access to their filesystems (which seems unlikely for a SaaS provision)!
"What do you mean by timepicker token correctly - try using $timepicker.earliest$ and $timepicker.latest$"
I am using the same, I am not sure what the issue is here:
<form version="1.1" theme="dark">
  <label>DMT Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) repoter.dataloadingintiated
| stats count by local
| append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) task.dataloadedfromfiles NOT "error" NOT "end_point" NOT "failed_data"
  | stats count as FilesofDMA]
| append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) "app.mefwebdata - jobintiated"
  | eval host = case(match(host_ip, "12.234"), "HOP"+substr(host, 120,24), match(host_ip, "10.123"), "HOM"+substr(host, 120,24))
  | eval host = host + " - " + host_ip
  | stats count by host
  | fields - count
  | appendpipe [stats count | eval Error="Job didn't run today" | where count==0 | table Error]]
| stats values(host) as "Host Data Details", values(Error) as Error, values(local) as "Files created localley on AMP", values(FilesofDMA) as "File sent to DMA"</query>
          <earliest>timepicker.earliest</earliest>
          <latest>timepicker.latest</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentageRow">false</option>
        <option name="rowNumbers">true</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="host_ip">
          <colorPalette type="map">{"12.234.201.22":#53A051, "10.457.891.34":#53A051, "10.234.34.18":#53A051, "10.123.363.23":#53A051}</colorPalette>
        </format>
        <format type="color" field="local">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="FilesofDMA">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="Files created localley on AMP">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="File sent to DMA">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="Error">
          <colorPalette type="map">{"Job didn't run today":#DC4E41}</colorPalette>
        </format>
        <format type="color" field="Host Data Details">
          <colorPalette type="map">{"HOM-jjderf - 10.123.34.18":#53A051, "HOM-iytgh - 10.123.363.23":#53A051, "HOP-wghjy - 12.234.201.22":#53A051, "HOP-tyhgt - 12.234.891.34":#53A051}</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</form>
Wow... what a broad question. What do you mean by integrate? Which direction? Generally you can call REST endpoints and consume whatever comes out, and you can also send data there. If you get data pushed, you would have to set up a point (machine) where you can receive the data, process it, and forward it to Splunk, or use a HEC (HTTP Event Collector) endpoint of a Splunk instance. If the SaaS produces machine-readable files, you would be able to consume those as well. So you see that there are various ways.
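As an illustration of the HEC route, sending one event looks something like this (a sketch; the host, port, token, index, and sourcetype are all placeholders):
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 12345678-1234-1234-1234-123456789012" \
  -d '{"event": {"action": "login", "user": "alice"}, "sourcetype": "saas:events", "index": "saas_idx"}'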
You need to chain a few commands:
| inputlookup remediation.csv
| stats count by knowbe4, solution
| rex field=knowbe4 mode=sed "s/<\/?\w+.*?\/?>//g"
| rex field=solution mode=sed "s/<\/?\w+.*?\/?>//g"
The two rex commands use sed mode to strip HTML tags from the knowbe4 and solution fields.