All Posts

I'm seeing errors such as: Corrupt csv header in CSV file , 2 columns with the same name '' (col #12 and #8, #12 will be ignored), but I can't find any reference to which CSV file is causing this error. Does anyone have any guidance on how to find the offending CSV file?
Please don't spam the same question in multiple places. As to your question - check the original event which triggers your notable and see if the event is truncated. If it is, you might need to tweak your ingestion parameters so that a longer part of the event is retained.
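For the truncation case specifically, a minimal props.conf sketch - the sourcetype name is a placeholder and the limit is just an example, raise it only as far as you actually need:

# props.conf on the indexers / heavy forwarders (sketch)
[your_sourcetype]
# default TRUNCATE is 10000 bytes; events longer than this get cut off
TRUNCATE = 50000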
Ugh. That's bad. While @richgalloway 's solution should work (you can try to be even more explicit with a more precise definition of the timestamp format for line breaking), you'll be getting some ugly trailers on some of your events. Also, since these are the contents of a json field, some characters will most probably be escaped. It would be best if you managed to:
1) Work with the source side so that you get your events in a more reasonable way (without all this json overhead) - preferred option.
2) If you can't do that, use a pre-processing step in the form of an external script/tool/whatever which will "unpack" those jsons and just leave you with the raw data.
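If neither option is available, a rough in-Splunk approximation is sometimes possible with SEDCMD after line breaking; a minimal sketch, assuming the payload sits in a single json field called "message" - both the field name and the regexes are assumptions to adapt:

# props.conf (sketch only)
[your_sourcetype]
# keep only the contents of the "message" field, dropping the json envelope/trailer
# (class names chosen so the unwrap is intended to run before the unescape)
SEDCMD-a_unwrap = s/^.*"message"\s*:\s*"(.*)"\s*[,}].*$/\1/
# crude unescaping of embedded quotes
SEDCMD-b_unescape = s/\\"/"/g

This won't be as clean as fixing the source or pre-processing externally, but it can make the events searchable until then.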
There are many threads about migrating environments in different scenarios - use the search. The general idea is that your environment should be consistent in terms of the OS used and its version, but there is no explicit requirement that the SH tier must be on the same OS distribution as the indexer tier (although it is of course best to have a relatively homogeneous environment for maintenance reasons) or that the DS must be on the same OS as the SHs.
Regardless of whether you mean loadjob as some form of batch-ingesting events or an actual invocation of Splunk's loadjob command, the typical approach to filtering events by the contents of a lookup is to use the lookup to assign a field value and then filter on that value. This way you'll get only those events that do have the wanted values. Keep in mind though that:
1) You still need to read all matching "before lookup" events, so if you're filtering down to a very small subset of events, another approach might be better (see the sketch below).
2) If your lookup is big, indeed moving to KV store can be the thing to do.
Anyway, this is the approach:
<your initial search>
| lookup mylookup.csv lookupfield AS eventfield OUTPUT lookupfield AS somefieldwewanttofilterby
| where isnotnull(somefieldwewanttofilterby)
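Regarding point 1, a minimal sketch of the subsearch alternative for small lookups - the lookup and field names are just the placeholders carried over from the example above:

<your initial search> [ | inputlookup mylookup.csv | fields lookupfield | rename lookupfield AS eventfield ]

Here the lookup values become part of the base search, so only matching events are returned by the indexers in the first place; it only pays off while the lookup stays well under the subsearch result limits.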
That's true, but not entirely true. Things that are needed for the initial phase of the search are replicated to the search peers as a so-called "knowledge bundle". Otherwise search peers couldn't - for example - extract fields and search for those fields within the events, since TAs are typically installed on SHs if they only contain search-time settings. So there are things that are pushed from the SH tier to the indexer tier (I'm not sure how it works with federated search; never tested it). So generally, yes - your search peers should receive the knowledge bundle from the SH. You should have subdirectories in $SPLUNK_HOME/var/run/searchpeers/ on your indexers containing the knowledge bundle (some subset of etc/system, etc/apps and etc/users). EDIT: But this will be a subset of the contents of those directories, so Splunk might decide that some of the settings are not used at all on the indexer tier and will not replicate them (for example, I could expect alert_actions.conf not to be pushed as part of the knowledge bundle, since an alert action will not be fired on an indexer - it will be run on a SH).
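You can also eyeball bundle replication from the SH side over REST; a rough sketch - treat the exact attribute names as assumptions to verify on your version:

| rest /services/search/distributed/peers splunk_server=local
| table title replicationStatus status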
As @gcusello already pointed out, your idea of configuring the input was wrong, but let me add my three cents to this.
1) I'm not sure about Fortigate logs, but generally, if you have RFC-compliant syslog, TAs do extract the host from the event itself, so the field value assigned by the input is overwritten during the ingestion process.
2) It's not a very good idea to read syslog events directly on a forwarder, for various reasons - performance, manageability, lack of network-level metadata. It's better to use an intermediate syslog daemon either sending to a HEC input or at least writing to files and reading those files with the forwarder (see the sketch below). There are various options here, most notably SC4S.
EDIT: 3) Oh, and you definitely don't want to set the sourcetype to "firewall_logs". If you're using a TA for Fortigate, use the proper sourcetype for this data as specified in the TA's docs.
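For the file-based variant from point 2, a minimal inputs.conf sketch on the forwarder; the path, index and per-device directory layout are assumptions, and the sourcetype must come from the Fortigate TA's documentation:

# inputs.conf on the UF (sketch)
# assumes the syslog daemon writes one directory per sending device:
# /var/log/syslog-ng/<device>/fortigate.log
[monitor:///var/log/syslog-ng/*/fortigate.log]
sourcetype = <sourcetype_from_the_Fortigate_TA_docs>
index = network
disabled = false
# 4th path segment is the device directory; use it as the host value
host_segment = 4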
First and foremost - what data do you have in your Splunk?
Do you have only two possible bag types? Generally that's possible, but the question is how to do it most effectively/elegantly. The obvious thing would be to do stats by each date/airline and then fillnull or eval with coalesce, but the question is whether that's enough to get results as "date, airline, bags local, bags transferred" or whether you need to split it back into separate rows (a sketch of the latter is below).
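A minimal sketch of the split-back-to-rows variant, assuming only the two bag types shown (Local, Transfer) and renaming the fields up front just to avoid quoting field names with spaces:

<your search producing the table>
| rename "Date Out" AS date_out, "Airline" AS airline, "Bag Type" AS bag_type, "Total Processed" AS total
| eval key=date_out."|".airline
| chart sum(total) over key by bag_type
| fillnull value=0 Local Transfer
| untable key bag_type total
| eval date_out=mvindex(split(key,"|"),0), airline=mvindex(split(key,"|"),1)
| table date_out airline bag_type total

The chart step creates one column per bag type for every date/airline combination, fillnull turns the missing combinations into 0, and untable splits the result back into one row per bag type.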
Hi everyone, I need your help. I have json data, and the format is like this:

"alert_data": {"domain": "abc.com", "csv": {"id": 12345, "name": "credentials.csv", "mimetype": "text/csv", "is_safe": true, "content": [{"username": "test@abc.com", "password":"1qaz@WSX#EDC"}

Because the password is sensitive information, I mask the first 6 characters before indexing. In addition, I need to check whether the password meets the complexity requirements, for example: the password should be at least 8 characters long and must include at least three of the following: numbers, uppercase letters, lowercase letters, and special characters. So the indexed data should be:

"alert_data": {"domain": "abc.com", "csv": {"id": 12345, "name": "credentials.csv", "mimetype": "text/csv", "is_safe": true, "content": [{"username": "test@abc.com", "password":"******SX#EDC","is_password_meet_complexity":"Yes"}

I already mask the password with SEDCMD like this:

[json_sourcetype]
SEDCMD-password = s/\"password\"\:\s+\"\S{6}([^ ]*)/"password":"******\1/g

But I have no idea how to extract the complexity metadata of the password field before indexing (i.e. add the "is_password_meet_complexity" field to the log). Should I use ingest-time eval? Your support in this is highly appreciated.
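Ingest-time eval is the usual direction for this; a minimal props.conf/transforms.conf sketch, assuming a simple "password":"..." layout in the raw event, that the ordering relative to the SEDCMD masking still needs to be verified, and that the flag ends up as an indexed field rather than being written back into _raw:

# transforms.conf (sketch - verify whether it sees the event before or after the SEDCMD masking)
[password_complexity_flag]
# extract the (still unmasked) password, count character classes, set the flag;
# pw/pw_classes are temporaries nulled at the end so the cleartext is not indexed - verify on your version
INGEST_EVAL = pw=replace(_raw, "(?s).*\"password\"\s*:\s*\"([^\"]+)\".*", "\1"), pw_classes=if(match(pw,"[0-9]"),1,0)+if(match(pw,"[a-z]"),1,0)+if(match(pw,"[A-Z]"),1,0)+if(match(pw,"[^0-9A-Za-z]"),1,0), is_password_meet_complexity=if(len(pw)>=8 AND pw_classes>=3,"Yes","No"), pw:=null(), pw_classes:=null()

# props.conf (sketch)
[json_sourcetype]
TRANSFORMS-pwcomplexity = password_complexity_flag

If the SEDCMD turns out to run first, the check would only ever see the masked value, so test the ordering on a sample sourcetype before relying on this.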
Giuseppe, aren't you confusing this with the SHC Deployer? Because the Deployer behaves as you said. Thanks & bye.
Thanks, I know how DS<>UF works. So, is there a way to tell the DS: maintain ONLY addon#1 + addon#2 + addon#3 and DELETE ALL OTHER CUSTOM ADDONS (addon#4 in this example)? THE ANSWER IS: NO!!!
I have a table as below:

Date Out     Airline   Bag Type   Total Processed
01/05/2024   IX        Local      100
01/05/2024   IX        Transfer   120
02/05/2024   BA        Local      140
02/05/2024   BA        Transfer   160
03/05/2024   IX        Local      150

Whenever a Bag Type is missing for a certain Airline (in the above case, Transfer data is missing for 03/05/2024 IX), I need to create a manual row entry with the value 0 (Total Processed = 0):

Date Out     Airline   Bag Type   Total Processed
01/05/2024   IX        Local      100
01/05/2024   IX        Transfer   120
02/05/2024   BA        Local      140
02/05/2024   BA        Transfer   160
03/05/2024   IX        Local      150
03/05/2024   IX        Transfer   0
It doesn't work that way.
1. The DS doesn't manage anything. The DC (deployment client - typically a forwarder, but you can use the DS to configure other components) calls the DS and asks for the current versions of the apps that the DS thinks the DC should have.
2. The DC compares the checksum of each app it got from the DS with the checksum of the app it has locally. If it differs, the DC removes the local app and unpacks the app downloaded from the DS. (Or removes an app if the app is explicitly configured to be removed, as far as I remember, but I'm not 100% sure here.)
And that's pretty much all there is to it (see the serverclass.conf sketch below for how apps get mapped to clients). So there is no way to manage apps which are not explicitly configured. But even if you tried doing so with ugly hacks, like spawning a script from an input which would scan all apps on a DC and remove all but whitelisted ones, remember that there are default apps in etc/apps which are installed during the component installation and upgraded with it. And you don't want to mess with them. So:
1) No
EDIT: Interesting, I'm pretty sure I've typed in more than just that "no" above. But apparently only this made it to the answer. I have no idea what happened.
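For reference, a minimal serverclass.conf sketch of that mapping on the DS - the class name and whitelist are placeholders; only the apps listed under a class a client matches are managed on that client, everything else in its etc/apps is left alone:

# serverclass.conf on the deployment server (sketch)
[serverClass:linux_uf]
# which deployment clients this class applies to
whitelist.0 = uf-*.example.com

[serverClass:linux_uf:app:Check_System]
# unpack and enable the app on matching clients; restart splunkd if it changes
stateOnClient = enabled
restartSplunkd = true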
Maybe some DS conf to change? I'll see. For now, as I said, I prefer to maintain the custom user addons; in fact, I would have problems with several users. I only wanted to know if there was a way to do the opposite. Thanks
Hi @verbal_666, it's really strange, because I experienced the opposite behavior: the DS removed all the apps not managed by itself, but I don't remember the version. Anyway, open a case to Splunk Support. Ciao. Giuseppe
/etc/apps of the UF:

+++ my addons deployed by DS
Check_System
Ethernet-Speed
GET_ALL
maxKBps
output

+++ custom addons created on the UF (still there)
GET_ALL_FAKE_IDX
LOCAL

+++ internal
SplunkUniversalForwarder
introspection_generator_addon
journald_input
learned
search
splunk_httpinput
splunk_internal_metrics

INFO DC:HandshakeReplyHandler [1815 HttpClientPollingThread_A48B7A13-D8C3-4DBB-ADAD-5F1F80E30A12] - Handshake done.
As I say, I prefer this behaviour, since sometimes it's useful to insert addons manually, outside the DS, but I wanted to know if the opposite was in fact possible, with changes to the DS! I confirm custom addons remain on my UFs.
It's not so. On an 8.2.x infrastructure, user addons on a UF controlled by the DS remain on the UF. I also checked on another TEST INFRASTRUCTURE, and custom addons remain inside /etc/apps of the UF controlled by the DS. The UF did handshake with the DS.
Hi @vmadala, a stand-alone Search Head doesn't replicate any app to Search Peers. A SH replicates apps to other SHs only if they are clustered in a Search Head Cluster. Apps on Indexers are deployed by the Cluster Manager (in an Indexer Cluster), or manually or by a Deployment Server on non-clustered Indexers. Ciao. Giuseppe