All Posts


That's true, but not entirely true. Things that are needed for the initial phase of a search are replicated to the search peers as a so-called "knowledge bundle". Otherwise the search peers couldn't - for example - extract fields and search on those fields within the events, since TAs are typically installed only on SHs if they contain nothing but search-time settings. So there are things that are pushed from the SH tier to the indexer tier (I'm not sure how it works with federated search; never tested it).

So generally, yes - your search peers should receive the knowledge bundle from the SH. You should have subdirectories in $SPLUNK_HOME/var/run/searchpeers/ on your indexers containing the knowledge bundle (a subset of etc/system, etc/apps and etc/users).

EDIT: But since this is only a subset of the contents of those directories, Splunk may decide that some settings are not used at all in the indexer tier and not replicate them (for example, I would expect alert_actions.conf not to be pushed as part of the knowledge bundle, since an alert action is never fired on an indexer - it runs on a SH).
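For completeness: which conf files end up in the knowledge bundle can also be tuned on the search head in distsearch.conf. A hedged sketch only - the conf file names I picked are just examples, and you should verify the exact attribute names against distsearch.conf.spec in your version:

# distsearch.conf on the search head (sketch)
[replicationSettings:refineConf]
# include props/transforms in the bundle, keep alert_actions out of it
replicate.props = true
replicate.transforms = true
replicate.alert_actions = false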
As @gcusello already pointed out, your idea of configuring the input was wrong, but let me add my three cents to this.

1) I'm not sure about Fortigate logs but generally, if you have RFC-compliant syslog, TAs do extract the host from the event itself, so the field value assigned by the input is overwritten during the ingestion process.

2) It's not a very good idea to read syslog events directly on a forwarder, for various reasons - performance, manageability, lack of network-level metadata. It's better to use an intermediate syslog daemon, either sending to a HEC input or at least writing to files and reading those files with the forwarder (see the sketch below). There are various options here, most notably SC4S.

EDIT: 3) Oh, and you definitely don't want to set the sourcetype to "firewall_logs". If you're using a TA for Fortigate, use the proper sourcetype for this data as specified in the TA's docs.
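To illustrate point 2, a minimal sketch of the file-based variant. It assumes a local syslog daemon (rsyslog, syslog-ng, ...) already writes one directory per sending device under /var/log/remote-syslog/<hostname>/ - the path, index and sourcetype placeholder are mine, not from the TA:

# inputs.conf on the forwarder (sketch)
[monitor:///var/log/remote-syslog/*/fortigate.log]
# the 4th path segment is <hostname>, so use it as the host field
host_segment = 4
# replace with the sourcetype documented by the Fortigate TA
sourcetype = <sourcetype_from_TA_docs>
index = firewall
disabled = 0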
First and foremost - what data do you have in your Splunk?
Do you have only two possible bag types? Generally that's possible, but the question is how to do it most effectively/elegantly. The obvious thing would be to do stats by each date/airline and then fillnull or eval with coalesce, but the question is whether that's enough to get results as "date, airline, bags local, bags transferred", or whether you need to split it back into separate rows.
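Something along these lines might work - a sketch only, assuming the field names from the table in the question (Date Out, Airline, Bag Type, Total Processed), that Local and Transfer are the only two bag types, and a placeholder base search:

<your current search producing Date Out, Airline, Bag Type, Total Processed>
| rename "Date Out" AS date_out, "Bag Type" AS bag_type, "Total Processed" AS total
| eval key=date_out . "|" . Airline
| chart sum(total) OVER key BY bag_type
| fillnull value=0 Local Transfer
| untable key bag_type total
| eval date_out=mvindex(split(key, "|"), 0), Airline=mvindex(split(key, "|"), 1)
| table date_out Airline bag_type total
| rename date_out AS "Date Out", bag_type AS "Bag Type", total AS "Total Processed"

The chart/fillnull/untable round trip materialises the missing date/airline/bag type combinations as 0 and then splits them back into one row per combination.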
Hi everyone, I need your help. I have JSON data in this format:

"alert_data": {"domain": "abc.com", "csv": {"id": 12345, "name": "credentials.csv", "mimetype": "text/csv", "is_safe": true, "content": [{"username": "test@abc.com", "password":"1qaz@WSX#EDC"}

Because the password is sensitive information, I mask its first 6 characters before indexing. In addition, I need to check whether the password meets a complexity rule, for example: at least 8 characters long and containing at least three of the following: numbers, uppercase letters, lowercase letters, and special characters. So the indexed data should be:

"alert_data": {"domain": "abc.com", "csv": {"id": 12345, "name": "credentials.csv", "mimetype": "text/csv", "is_safe": true, "content": [{"username": "test@abc.com", "password":"******SX#EDC","is_password_meet_complexity":"Yes"}

I already mask the password with SEDCMD like this:

[json_sourcetype]
SEDCMD-password = s/\"password\"\:\s+\"\S{6}([^ ]*)/"password":"******\1/g

But I have no idea how to derive the password-complexity information before indexing (i.e. add the "is_password_meet_complexity" field to the event) - should I use an ingest-time eval? Your support on this is highly appreciated.
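One possible direction is indeed an ingest-time eval. The sketch below is unverified and makes several assumptions: the transform name is made up, the complexity is computed from _raw, and you need to check in your version whether the SEDCMD mask runs before or after TRANSFORMS (if the mask runs first, the eval would see the already-masked value and the check would be wrong). Also note that INGEST_EVAL produces an indexed field rather than rewriting the JSON in _raw:

# props.conf (sketch)
[json_sourcetype]
TRANSFORMS-pwcomplexity = set_password_complexity

# transforms.conf (sketch)
[set_password_complexity]
INGEST_EVAL = pw=replace(_raw, "(?s).*\"password\":\s*\"([^\"]*)\".*", "\1"), is_password_meet_complexity=if(len(pw)>=8 AND (if(match(pw,"[0-9]"),1,0)+if(match(pw,"[a-z]"),1,0)+if(match(pw,"[A-Z]"),1,0)+if(match(pw,"[^A-Za-z0-9]"),1,0))>=3, "Yes", "No"), pw:=null()

The final pw:=null() is intended to drop the temporary field so the cleartext password is not written as an indexed field - verify that behaviour in a test index before relying on it.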
Giuseppe, aren't you confusing this with the SH Deployer? Because the Deployer behaves as you described. Thanks & bye.
Thanks, I know how DS<>UF works.

So, is there a way to tell the DS: maintain ONLY addon#1 + addon#2 + addon#3 and DELETE ALL OTHER CUSTOM ADDONS (addon#4 in this example)?

THE ANSWER IS: NO!!!
I have a table as below:

Date Out     Airline   Bag Type   Total Processed
01/05/2024   IX        Local      100
01/05/2024   IX        Transfer   120
02/05/2024   BA        Local      140
02/05/2024   BA        Transfer   160
03/05/2024   IX        Local      150

Whenever a Bag Type is missing for a certain Airline (in the above case, Transfer data is missing for 03/05/2024 IX), I need to create a manual row entry with value 0 (Total Processed = 0):

Date Out     Airline   Bag Type   Total Processed
01/05/2024   IX        Local      100
01/05/2024   IX        Transfer   120
02/05/2024   BA        Local      140
02/05/2024   BA        Transfer   160
03/05/2024   IX        Local      150
03/05/2024   IX        Transfer   0
It doesn't work that way.

1. The DS doesn't manage anything. The DC (deployment client - typically a forwarder, but you can use the DS to configure other components) calls the DS and asks for the current versions of the apps the DS thinks the DC should have.

2. The DC compares the checksum of each app it got from the DS with the checksum of the app it has locally. If they differ, the DC removes the local app and unpacks the app downloaded from the DS. (Or removes an app if the app is explicitly configured to be removed, as far as I remember, but I'm not 100% sure here.)

And that's pretty much all there is to it. So there is no way to manage apps which are not explicitly configured. But even if you tried to do so with ugly hacks, like spawning a script from an input which would scan all apps on a DC and remove all but whitelisted ones, remember that there are default apps in etc/apps which are installed during the component installation and upgraded with it. And you don't want to mess with them.

So: 1) No

EDIT: Interesting, I'm pretty sure I typed in more than just that "no" above. But apparently only this made it to the answer. I have no idea what happened.
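For context, app management on the DS is always expressed per app in serverclass.conf - a minimal sketch with made-up class/app names; there is no attribute that tells clients to delete apps the DS doesn't know about:

# serverclass.conf on the deployment server (sketch)
[serverClass:linux_uf]
whitelist.0 = serverx*

[serverClass:linux_uf:app:addon1]
stateOnClient = enabled
restartSplunkd = true

stateOnClient can be enabled, disabled or noop, but it only ever applies to apps explicitly listed like this.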
Maybe some DS conf to change? I'll see.

For now, as said, I prefer to maintain the custom user addons - in fact I would have problems with several users otherwise. I only wanted to know if there was a way to do the opposite.

Thanks
Hi @verbal_666,

it's really strange, because I experienced the opposite behavior: the DS removed all the apps not managed by itself, but I don't remember the version.

Anyway, open a case to Splunk Support.

Ciao.

Giuseppe
/etc/apps of the UF:

+++ my addons deployed by the DS
Check_System
Ethernet-Speed
GET_ALL
maxKBps
output

+++ custom addons created on the UF (still there)
GET_ALL_FAKE_IDX
LOCAL

+++ internal
SplunkUniversalForwarder
introspection_generator_addon
journald_input
learned
search
splunk_httpinput
splunk_internal_metrics

INFO DC:HandshakeReplyHandler [1815 HttpClientPollingThread_A48B7A13-D8C3-4DBB-ADAD-5F1F80E30A12] - Handshake done.
As I say, I prefer this behaviour, since sometimes it's useful to install addons manually, outside the DS - but I wanted to know if the opposite was in fact possible with changes to the DS! I confirm custom addons remain on my UFs.
It's not so. On an 8.2.x infrastructure, user addons on a UF controlled by the DS remain on the UF. I also checked on another TEST INFRASTRUCTURE, and custom addons remain inside /etc/apps of a UF controlled by the DS. The UF did its handshake with the DS.
Hi @vmadala,

a stand-alone Search Head doesn't replicate any app to Search Peers. A SH replicates apps only to other SHs, and only if they are clustered in a Search Head Cluster. Apps on Indexers are deployed by the Cluster Manager (in an Indexer Cluster), or manually or via a Deployment Server on non-clustered Indexers.

Ciao.

Giuseppe
Hi @verbal_666,

if a user (even root) directly adds an app (or add-on) on a UF managed by a Deployment Server, at the first check of the UF configuration by the DS, the unmanaged app is removed from the UF.

Ciao.

Giuseppe
Hi @whitecat001,

you could try something like this:

index=your_index
| stats latest(_time) AS _time BY Account_name

If you don't want to use the _time field but would rather rename it, remember that _time is in epoch time and is automatically displayed as human readable; if you rename it, you also have to convert it to a human-readable format:

index=your_index
| stats latest(_time) AS latest BY Account_name
| eval latest=strftime(latest,"%Y-%m-%d %H:%M:%S")

Ciao.

Giuseppe
Hi @AtherAD,

the connection_host parameter defines how the host value is derived (ip or dns); you cannot use it to assign a host value. In addition, you cannot assign multiple hostnames to an input, only one at a time (eventually using host, not connection_host).

You could try to use the connection_host parameter in your input as described at https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Inputsconf#UDP_.28User_Datagram_Protocol_network_input.29 :

connection_host = [ip|dns|none]
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for the IP address of the system that sends the data. For this to work correctly, set the forward DNS lookup to match the reverse DNS lookup in your DNS configuration.
* "none" leaves the host as specified in inputs.conf, typically the Splunk system hostname.
* If the input is configured with a 'sourcetype' that has a transform that overrides the 'host' field, e.g. 'sourcetype=syslog', that takes precedence over the host specified here.
* Default: ip

In your case:

[udp://514]
sourcetype = firewall_logs
connection_host = dns
disabled = 0
acceptFrom = 192.168.1.*, 192.168.1.*

Ciao.

Giuseppe
Hi.

QUESTION: is there a method/configuration to fully align a UF with the Deployment Server?

Let me explain:

The DS for ServerX has 3 addons configured: addon#1 + addon#2 + addon#3.
The UF on ServerX receives addon#1 + addon#2 + addon#3 perfectly.
Now a user logs into ServerX as root and creates his own custom addon inside the UF, addon#4.
Now ServerX has addon#1 + addon#2 + addon#3 (DS) + addon#4 (custom, created by the user).

Is there a way to tell the DS: maintain ONLY addon#1 + addon#2 + addon#3 and DELETE ALL OTHER CUSTOM ADDONS (addon#4 in this example)?

Thanks.
Hello, I have created a new role, but I noticed that the users to whom I have assigned that role get an "error occurred while rendering the page template" when they click the Fields option under Knowledge. I looked at the capabilities but can't seem to find the right one that provides access to Fields.