All Posts

Is there a question or are you just reporting this? If it is a question, you should provide more information about what you have tried, and what the actual errors are.
Hi Giuseppe, thank you for your help, it worked!
Thanks for answering. What do you mean by " editing the lookup_edit file "? I am in  "Settings - User Interface - Views" but don't see any way to edit any file or where to alter the text you refer to. Where is this file I should edit?
Does anyone know where I can find information on installing and configuring the ESET TA and app on Splunk Enterprise (Debian Linux), with the ESET Administrator on Windows? I don't have any information on installing the newer versions compatible with Splunk Enterprise 9.0.5. Despite having configured the ESET syslog export according to the documentation, I don't see any logs arriving on my search head. https://help.eset.com/protect_admin/90/en-US/admin_server_settings_syslog.html https://splunkbase.splunk.com/app/3931/ https://splunkbase.splunk.com/app/3867/#/details or https://splunkbase.splunk.com/app/6808
Each time I run a search query and click visualisation, the default is "column chart". How do I set this to default to "line chart" for myself, and how do I set this for other users? Thanks in advance
Hi @splunkreal , to my knowledge that means different Indexers, not indexes. There's no sense in duplicating logs into two indexes on the same Indexers. But you have to set the same index name, because you can only set one index in the input stanza. If you want a different index name on the second set of Indexers, you have to override this value there.  Ciao. Giuseppe
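For reference, overriding the index on the receiving indexer is usually done with an index-time transform. A rough sketch (the stanza names, sourcetype, and target index below are placeholders, not taken from this thread):

```
# props.conf on the second set of indexers (sourcetype name is an example)
[example_sourcetype]
TRANSFORMS-override_index = route_to_second_index

# transforms.conf -- rewrite the index metadata key for every matching event
[route_to_second_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = second_index
```

The target index (`second_index` here) must already exist on those indexers, or the events will land in the last-chance index instead.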
Hi @gcusello  great, thanks. However, would it work if we set a different index in the secondary stanza?
Yes, you are right, the second search is useless. index=nessus Risk=Medium earliest=-9d latest=now | stats values(CVE) as CVE_9d by extracted_Host | eval Status=case(isnull(CVE_9d) AND isnotnull(CVE_now), "New", isnotnull(CVE_9d) AND isnull(CVE_now), "Finished", isnotnull(CVE_9d) AND isnotnull(CVE_now), "Not Changed") | table extracted_Host, Status My problem is still here. I import vulnerability scan logs into Splunk and get the information about which host has an open CVE and how critical it is. I want this output: if the scanned host is in the old scan from 7 days ago but not in today's new scan, the host is "Finished"; if it is in both the old scan and the new scan, "Unchanged"; if it is not in the old scan but is in the new scan, "New". I need this information so I can build a dashboard and see which hosts and CVEs are done, which are still open, and which are new, because I must hand the information/tickets over to the server admins. Thanks
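One way to sketch the comparison in a single search (a rough sketch, not tested against this data; it assumes the newest scan falls within the last day, the old scan within the last 9 days, and that extracted_Host and CVE are extracted as shown earlier in the thread):

```
index=nessus Risk=Medium earliest=-9d latest=now
| eval window=if(_time >= relative_time(now(), "-1d@d"), "new", "old")
| stats values(window) AS windows BY extracted_Host, CVE
| eval Status=case(mvcount(windows)==2, "Unchanged",
                   windows=="old",      "Finished",
                   windows=="new",      "New")
| table extracted_Host, CVE, Status
```

Because stats groups by both host and CVE, a host can be "Finished" for one CVE and "New" for another, which matches the ticketing use case. Adjust the relative_time offset to match the actual scan cadence.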
I was fighting with the query, as it kept on giving me results, but it seems I overlooked the fact that the "off" trigger happened twice and the other only once.   Great! Thanks a lot
CAT to Splunk Logs Failing: host = 161.209.202.108 user = sv_cat port = 22 Start time: 10/24/2023 at 4:21am 
Hi @splunkreal , it should be possible using two _TCP_ROUTING targets in inputs.conf pointing to the two different destinations, obviously two different sets of Indexers. But in this way you pay the license twice, because the data is indexed twice. For more info see https://docs.splunk.com/Documentation/Splunk/9.1.1/Forwarding/Routeandfilterdatad#Route_inputs_to_specific_indexers_based_on_the_data_input  Ciao. Giuseppe
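As a rough sketch of what that could look like on the forwarder (the group names, hosts, and monitored path below are placeholders, not taken from this thread):

```
# outputs.conf -- one tcpout group per destination set of indexers
[tcpout:indexers_a]
server = idx-a.example.com:9997

[tcpout:indexers_b]
server = idx-b.example.com:9997

# inputs.conf -- listing both groups in _TCP_ROUTING clones the data to both
[monitor:///var/log/example.log]
index = main
_TCP_ROUTING = indexers_a, indexers_b
```

With both groups listed, the forwarder sends a copy of each event to each group, which is why the data counts against the license twice.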
Hi @bambarita , which URL did you use? Please try this: https://splunk.my.site.com/customer/s Ciao. Giuseppe
Hello, as far as I understand, the Splunk data model has two main goals: 1) Data models enable users of Pivot to create compelling reports and dashboards without designing the searches that generate them. So, the Pivot tool lets you report on a specific data set without the Splunk Search Processing Language. 2) It's possible to refer to the CIM data models to normalize different names for data having the same function. In this case, we need to normalize data by using tags, aliases, eventtypes, etc.: Alerts, Application State, Authentication, Certificates, Databases, Data Loss Prevention, Email, Interprocess Messaging, Intrusion Detection, Inventory, Java Virtual Machines, Malware, Network Resolution (DNS), Network Sessions, Network Traffic, Performance, Ticket Management, Updates, Vulnerabilities, Web. Is this correct? Thanks
Hi @waJesu , yes, at the top right of the Splunk dashboard editor there's the Import button. Ciao. Giuseppe
Hi @Abass42 , I tried with rex and it's working well:  | makeresults | eval _raw="10/24/2023 06:00:04,source=SXXXX-88880000,destination=10.10.100.130,DuBlIn_,11.11.119.111,port_80=True,port_443=True,port_21=False,port_22=True,port_25=False,port_53=False,port_554=False,port_139=False,port_445=False,port_123=False,port_3389=False" | extract | rex max_match=5 field=_raw "port\_(?P<open_ports>\d+)\=True" | mvexpand open_ports | table _time, destination, gpss_src_ip, open_ports
Nice document @_JP , thanks for sharing. The trouble with newbies is that they want one person to hold their hand and walk with them (literally). If we say "I can only show you the door, you alone must decide to walk through it" (the great Morpheus), they still want us to walk with them, holding their hands!
Just in case anyone is looking for proof of @richgalloway 's line "Splunk only supports HTTP 1.1" (and no HTTP/2 yet, as of Oct 2023): go to the server.conf docs and search for "http 1" (and/or "http 2"): https://docs.splunk.com/Documentation/Splunk/9.1.1/admin/serverconf
This isn't a question, rather just a place to drop a PDF I put together that I titled "Bare Bones Splunk".

I've seen a lot of people try to get started with Splunk, but then get stuck right after getting Splunk Enterprise installed on their local machine. It can be daunting to log into Splunk for the first time and know what the heck you should do. A person can get through the install to the What Happens Next page and be pretty overwhelmed with what to do next: Learn SPL and search? What should they search? How should they start getting their data in? What sort of data should they start getting in? What dashboard should they build? They've started... but need that a-ha example to see how this tool will fit into their existing environment and workflow.

The attached Bare_Bones_Splunk.pdf file guides the reader from the point of install to using the data already being indexed in index=_internal to replicate a few common use cases of Splunk: monitoring a web server, monitoring an application server, and monitoring security incidents.

The examples are really simple, and the resulting dashboard created in the tutorial is a poor example of something your boss might want (or not... how observant is your boss - do they just want a few graphs with nice colors?). But this will give someone a really quick intro to Splunk without having to do anything other than install it (and then maybe they will be ready to tackle a broader introduction, like the Search Tutorial).
Yes, field1, field2, x, y, z, a, b, c are all from the same set of events and are all non-null, and in general we might have other groupbys besides xyz and abc -- in one of my frequent use cases I have three: x, xy, and xyz, for instance (say, when I want to calculate statistics at different levels of granularity -- e.g. percentile response times by hour, or hour-IP, or hour-IP-server). I guess the question is more of a data-engineering problem than an analytics one: regardless of whether we want two tables or one, how do we generate the data quickly? As it happens, doing two or more separate searches is significantly slower than, say, running one and doing some fancy stats magic on it, even if that's more complicated. Also, just out of curiosity, what do we mean by normalized tables here?
You didn't run addinfo in the chain search as suggested. Of course that will cause an error, because without the fields addinfo provides it amounts to division by zero.
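For anyone landing here, a minimal sketch of what the suggestion looks like (the index and the derived field names are illustrative, not from the original thread). addinfo attaches the search time bounds as info_min_time and info_max_time, which a later eval can then safely divide by:

```
index=_internal
| addinfo
| stats count, min(info_min_time) AS t0, max(info_max_time) AS t1
| eval events_per_sec = count / (t1 - t0)
```

Without the | addinfo step, t0 and t1 are null, so the final division fails in the way described above.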