All Posts

Many thanks for your answer, gcusello. If I deploy a multisite cluster architecture, would it be possible to have search head clustering?
Hi @jip31, are you sure the data are in your datamodel? Test this using Pivot. Ciao. Giuseppe
Hello. The Splunkd service is not working: after I start/restart it, it stops again. I have tried several times. Could you please help me sort out this issue? Thanks in advance.
Hi @Leon88, you have to use a regex to extract this field, something like this:

index=your_index
| rex "\<ResponseID\>(?<ResponseID>[^\<]*)"
| table _time ResponseID

You can test the regex at https://regex101.com/r/Sj8hDe/1
Ciao. Giuseppe
Hi Gcusello, I can see the fields extracted in my datamodel, and even when I use your search below I get no results: | tstats count from datamodel=TEST where TEST.EventCode=100 — I only get results for inherited fields like host, sourcetype, and source.
It's not working.
Hi @maede_yavari, a multisite architecture is required only if you need Disaster Recovery; otherwise you can have a single-site Indexer Cluster even if the servers are spread across more than one site. That said, a multisite cluster with Search Affinity configured allows your SHs to search the local indexers instead of all the indexers. About search heads: a Search Head Cluster gives you knowledge-object replication, but you can also have standalone SHs that access the Indexer Cluster. In any case, don't use different clusters for different scopes: separating the logs will drive you crazy, and you will surely end up with duplicated data, because some logs must be used for more than one purpose. Data replication can be configured and in any case gives you more safety in case of a fault. Ciao. Giuseppe
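For reference, the search affinity mentioned above is driven by the site assigned to each search head in server.conf. A minimal sketch (hostname, key, and site names are examples, not your values; newer Splunk versions accept manager_uri in place of master_uri):

```ini
# server.conf on a search head that should prefer the indexers in site1
[general]
site = site1

[clustering]
mode = searchhead
master_uri = https://cluster-manager.example.com:8089
multisite = true
pass4SymmKey = <your_cluster_key>
```

With this in place, searches from that SH are served by the site1 indexers when possible, which is what reduces cross-site search traffic.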
Thanks for your reply. Our Splunk Architect recommends a multisite architecture, but in a multisite architecture I need to replicate data between sites so the search heads can search it. Also, as far as I know, we cannot cluster search heads together in a multisite architecture, because each site needs its own search head. Actually, permissions are not my concern: I want to decrease the replication load and bandwidth usage by separating the indexes.
Hi all, I have a case about monitoring Linux servers. Here is what I am trying to do. I am not sure whether all of it is possible, but I have to try, because the System Staff requested it from me:
1 - Servers with root SSH access enabled --> Need help
2 - When someone changes the sudoers file --> Done
3 - Root password changes --> Done
4 - Users who have UID 0 other than root --> Need help
I have done some of the steps but need help with the two remaining ones. Any help would be appreciated!
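For items 1 and 4, a quick check outside Splunk (which could feed a scripted input) might look like this sketch; the paths assume a standard Linux layout:

```shell
# Item 4: list every account with UID 0 -- anything besides "root" is suspicious
awk -F: '$3 == 0 {print $1}' /etc/passwd

# Item 1: check whether root SSH login is explicitly enabled
if [ -r /etc/ssh/sshd_config ]; then
    grep -Ei '^[[:space:]]*PermitRootLogin[[:space:]]+yes' /etc/ssh/sshd_config \
        && echo "root SSH login enabled" \
        || echo "root SSH login not explicitly enabled"
fi
```

Running this periodically as a scripted input (or via the Splunk Add-on for Unix and Linux) would let you alert on any output beyond the expected "root" line.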
Hi all, is there a way to demote a case to a container using a playbook? Thank you in advance.
Is there a built-in solution in Splunk that does frequency analysis (for example, on domain names)? There is a solution by Mark Baggett at https://github.com/MarkBaggett/freq but I had problems using it in Splunk. It can be run either with the Python script:

$ python3 freq.py freqtable2018.freq -m splunk.com
(6.0006, 5.0954)

or using curl:

$ curl http://127.0.0.1:20304/measure/splunk.com
(6.0006, 5.0954)

I want to run it against a field, e.g. "query" in my Zeek DNS logs, calculate the frequency, and save it in another field.
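There is no built-in SPL command for this, but a custom search command or external lookup script is a common route. As a simplified stand-in for Baggett's character-pair tables, Shannon entropy gives a similar "how random does this string look" signal; this is a sketch, not the freq.py algorithm itself:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character; random-looking DGA-style domains score higher."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(round(shannon_entropy("splunk"), 3))        # → 2.585
print(round(shannon_entropy("xq7vz9k2jw4t"), 3))  # → 3.585
```

Wrapped as a Splunk custom search command (or an external lookup), this could read your "query" field and emit the score into a new field, then you would threshold it with where or stats.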
Hi @RSS_STT, please try this:

| rex "\"CI\":\s+\"(?<CI_V2>[^;]*);(?<CI_1>[^;\"]*);(?<CI_2>[^;\"]*);(?<CI_3>[^;\"]*);(?<CI_4>[^;\"]*);(?<CI_5>[^\"]*)"

Ciao. Giuseppe
Hello Team, please help me with Splunk queries to detect: 1 - brute-force attacks, 2 - malicious payloads, and 3 - zero-day exploits, and to create email alerts for them. Thank you.
Hi @mukhan1, OK, but also perform the other check I suggested to verify the connection: telnet is important, but it isn't the only check to perform. You could have an open connection and still have outputs.conf configured incorrectly on your Forwarder! Let me know if you solved it or if I can help you more. Ciao. Giuseppe P.S.: Karma Points are appreciated
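For comparison, a minimal forwarder outputs.conf would look something like this (the hostnames are placeholders; 9997 is the conventional receiving port):

```ini
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

If telnet to the indexer port succeeds but this stanza is missing or points at the wrong group, data never leaves the forwarder.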
Hi @maede_yavari, your architecture doesn't make sense: you can have a very performant architecture with HA, so why do you want to divide it? My hint is to engage a Certified Splunk Architect to design your architecture. You can separate access to data by using different indexes in the Cluster and assigning different permissions to them. This way you have a linear infrastructure with one Cluster Master that manages all the Indexers and a Search Head (possibly clustered!) that accesses all the indexes on all the Indexers. Then you can separate access to data by creating different roles for the security indexes and the IT Operations indexes. Ciao. Giuseppe
I have the message below. How can I display only the ResponseID in the output? Thanks. message: <?xml version='1.0' encoding='ISO-8859-1'?><Submission Id="12345" <LastName>XXX</LastName><ResponseID>137ce83fe8ddb052-1698535326634</ResponseID><Date>2023.10.28 23:23:14</Date>
Ok. We need to get the terminology straight.

1. There is no such thing as a "summary index" as a type of index. Splunk has only two types of indexes - events and metrics. You can have a summary index in the sense of an index which receives your summaries, but that's purely an organizational matter.
1a. You can have both summary events and any other kind of events in the same index.
2. There is summary indexing, meaning a process in which you generate data which is saved into your indexes for summarizing purposes.
3. There is no such thing as commands "in" the index. Searches can read from an index and write to one, but they are not in an index. So you're either using collect explicitly, or it's done implicitly as a result of the summary indexing option on a scheduled search.
4. Indexes just hold data. They don't do anything with it. The data is either permanently transformed before being written to the index (that's what happens when data is collected into a summary index) or is dynamically transformed on read according to the sourcetype/source/host definition (in your case, the definition for the stash sourcetype). The index has nothing to do with it.

The summary indexing option on a scheduled search works the same way as the collect command: the results are written to an intermediate csv file, from which they are ingested into the destination index. But there you can't control the details the way you can with a manually spawned collect command. So either fiddle with the configuration described in the article I linked (might work, might not; I haven't tried it myself), manually split the results at search time (which might be problematic if you have spaces in your field values; in that case you could try to delimit multivalued fields differently before collecting), or split your events so that you don't have multivalued fields before collecting the summaries.
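As a concrete illustration of point 2, an explicit collect in a scheduled search might look like this sketch (the index and field names are assumptions; the target index must already exist):

```
index=web earliest=-1h@h latest=@h
| stats count AS hits BY status
| collect index=my_summary
```

By default collect writes events with the stash sourcetype, which is exactly why the stash sourcetype definition mentioned in point 4 governs how those summary events are parsed on read.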
Hello, we have a data center with several types of equipment, such as servers, switches, routers, EDR, some IoT sensors, virtualization, etc. Based on EPS, we need about 10 indexers per Splunk's recommendation. Now I want to separate the indexers into 4 clusters: one for servers, one for network devices, one for services, and the last one for security (such as firewalls and EDR). Each cluster would have several indexers, and each forwarder would send data to the related cluster; data would replicate only within its origin cluster, not across clusters. But I need each search head to be able to search across all 4 clusters, for example to search for login failures across all of them (servers, network devices, etc.). Could I have several clusters with one cluster master? Best Regards
If you know all container names in advance, simply enumerate them. One way to do this is to use foreach.

index=* Initialised xxxxxxxxxxxx xxxxxx
| rex "\{consumerName\=\'(MY REGEX)"
| stats count as Connections by Container_Name
| transpose header_field=Container_Name column_name=Container_Name
| foreach "Container A", "Container B", "Container C", "Container D"
    [eval <<FIELD>> = if(isnull('<<FIELD>>'), "(missing)", '<<FIELD>>')]
| transpose header_field=Container_Name column_name=Container_Name
| addcoltotals fieldname=Connections labelfield=Container_Name

For example, if your data is missing "Container D", you get:

Container_Name   Connections
Container A      1
Container B      1
Container C      1
Container D      (missing)
Total            3

If your data is missing "Container C", you get:

Container_Name   Connections
Container A      1
Container B      1
Container D      1
Container C      (missing)
Total            3

And so on. Here is an emulation for you to play with and compare with real data:

| makeresults
| fields - _time
| eval Container_Name = mvappend("Container A", "Container B"```, "Container C"```, "Container D")
``` data emulation above ```
@gcusello thanks for your reply. I have checked the connection by telnetting to Splunk and it connects successfully; I also cross-checked by adding another log file path, and that one is added successfully. I added the file path manually, but the file is still not showing in the Splunk GUI. I am going through the doc you provided and hope it will help.