All Posts


I have a simple Splunk setup: about 120 or so Linux servers (all basically appliances) with the universal forwarder installed, and a single Linux server running Splunk Enterprise acting as the indexer, search head, etc.

The problem I have is that the forwarders must feed each server's audit log into Splunk. That feed is actually working fine, but it's flooding the server and pushing me over my license limit. Specifically, the appliance app has an event in cron that runs very often, and it's flooding the audit log with file access, file modification, etc. events, which is ballooning the amount of data I send to Splunk Enterprise. Data that I simply do not need.

What I want to do is filter out these specific events, but ONLY for this specific user. I believe this can be done using transforms.conf and props.conf on the indexer, but I'm having trouble getting the syntax and fields right. Can anyone assist with this? Here's the data I need to remove:

sourcetype=auditd acct=appuser exe=/usr/sbin/crond exe=/usr/bin/crontab

So basically ANY events in the audit log for user "appuser" that reference either "/usr/sbin/crond" or "/usr/bin/crontab" need to be dropped. Here are two examples of the events I want to drop:

type=USER_END msg=audit(03/04/2024 15:58:02.701:5726) : pid=26919 uid=root auid=appuser ses=184 msg='op=PAM:session_close grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct=appuser exe=/usr/sbin/crond hostname=? addr=? terminal=cron res=success'

type=USER_ACCT msg=audit(03/04/2024 15:58:02.488:5723) : pid=26947 uid=appuser auid=appuser ses=184 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_localuser acct=appuser exe=/usr/bin/crontab hostname=? addr=? terminal=cron res=success'

Can this be done?
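For anyone landing here, a minimal sketch of the usual nullQueue approach on the indexer, assuming the events arrive with the sourcetype stanza auditd and building the regex from the two sample events above (the transform name is made up):

```conf
# props.conf on the indexer -- attach the filtering transform to the sourcetype
[auditd]
TRANSFORMS-drop_appuser_cron = drop_appuser_cron

# transforms.conf on the indexer -- route matching raw events to the null queue
[drop_appuser_cron]
REGEX = acct=appuser\s+exe=/usr/(sbin/crond|bin/crontab)
DEST_KEY = queue
FORMAT = nullQueue
```

The REGEX runs against _raw, so acct= and exe= must appear adjacent, as they do in both sample events; the indexer needs a restart for the change to take effect.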
The SqlServer module is available for PowerShell Core on various platforms. I can test a solution on Linux x86-64, but I don't have access to a macOS or ARM host. What platform is Server C?
Hi, I would like some help with an eval function where, based on the values of 3 fields, the eval field value will be either OK or Not Okay.

For example, these are the 4 fields in total (hostname, "chassis ready", result, synchronize):

hostname=alpha "chassis ready"=yes result=pass synchronize=no
hostname=beta "chassis ready"=yes result=pass synchronize=yes
hostname=charlie "chassis ready"=no result=pass synchronize=yes

I would like to do an eval for 'overallpass' where ("chassis ready"=yes AND result=pass AND synchronize=yes) makes 'overallpass' = OK, and everything else makes overallpass = "Not Okay", by hostname.

So based on the table above, here is the final output:

Hostname      overallpass
alpha         Not Okay
beta          OK
charlie       Not Okay
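For reference, one way to write that condition in SPL (single quotes are needed in eval to reference a field name containing a space):

```spl
| eval overallpass=if('chassis ready'="yes" AND result="pass" AND synchronize="yes", "OK", "Not Okay")
| table hostname overallpass
```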
A custom segmenter has merit in this case, but in general, folks will recommend tagging events with an appropriate add-on (or custom configuration) and using an accelerated Web or other data model to find matching URLs, hostnames, etc.
Hello @tscroggins,

Thank you for your follow-up. However, I am not sure we can apply this solution to our environment, since our Server C (the one with the universal forwarder) is not a Windows server. The other servers (types A and B) are not connected to any external networks. After some internal discussions, a new idea was proposed, and I am not sure whether it could work. @tscroggins, @isoutamo, @scelikok, I would appreciate your feedback on it. The idea is to deploy a heavy forwarder on server B and a universal forwarder on server C (the one connected to Splunk Cloud):

Servers A (DB servers): Our databases generate SQLAudit files (and probably some Oracle DB audit files from similar servers). No external connections are allowed to this category of servers.

Server B (relay): This is the only server that can establish communications with the DB servers (category A). On this server, we can install a heavy forwarder + DB Connect to collect MSSQL audit logs (and Oracle audit logs from the Oracle servers). Please note that there are no external connections to this server, and it cannot forward directly to Splunk Cloud.

Server C (universal forwarder): The only one with allowed external connections. The heavy forwarder on server B sends the collected logs to the universal forwarder on server C. The universal forwarder then uploads the SQLAudit and Oracle audit files to the Splunk Cloud instance.

Do you think this is a feasible setup?

Best regards,
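For what it's worth, the forwarder-chaining part of that idea can be sketched like this (the hostname and port 9997 are placeholders; the outputs.conf on server C pointing at Splunk Cloud would normally come from the Universal Forwarder credentials app downloaded from your Splunk Cloud instance):

```conf
# Server B (heavy forwarder) -- outputs.conf, relaying to server C
[tcpout]
defaultGroup = relay_to_c

[tcpout:relay_to_c]
server = server-c.example.local:9997

# Server C (universal forwarder) -- inputs.conf, listening for server B
[splunktcp://9997]
```

One design note: the heavy forwarder sends already-parsed ("cooked") data, which the universal forwarder on C can pass through unchanged to Splunk Cloud.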
Hi @AJ_splunk1, just a heads-up that https://docs.splunk.com/Documentation/Splunk/9.2.0/Updating/Upgradepre-9.2deploymentservers states: "In particular, there is a new system-generated app, etc/apps/SplunkDeploymentServerConfig, which contains configuration files necessary to the proper functioning of the deployment server. Do not alter this directory or its files in any way." I chose to implement the 9.2 fix (also shown at the page linked above) as a separate app in etc/apps on the DS and called it something like "Fix_DSClientList", so it's more obvious there's a modification in place. I hope that helps.
nvm, figured it out. It was the outputs.conf in this app: etc/apps/SplunkDeploymentServerConfig. The documentation is a bit confusing on this.
Are you adding an additional IP/name to your host, or just switching the IP-to-name mapping?
UDP is not a reliable protocol by design! With it you will always lose events. The only question is how many, and the answer is: an unknown amount.
Splunk 9.2.x has some new features and a new framework for the DS. That may or may not help in your kind of environment. But as it's an x.x.0 version, I won't put it into production yet!
Does this work with Splunk Cloud as well? We have a Splunk on-prem deployment server, the indexers are all in the cloud, and we are experiencing the same issue where clients are not showing up after an update to 9.2.x. They are phoning home, however, as per the logs.
Hi, I have created a dashboard with multiple panels, including a time range panel that defaults to the last 4 hours. On my end it works perfectly, but when the customer tries to use the dashboard, it shows "TimeRangepanel not found". Please assist me.
It comes down to whether Kafka can transport UDP and TCP reliably from all the devices to the various Splunk clusters.
I have a query where I am counting PASS and FAIL and displaying it as a pie chart. I also modified the search so that it displays the count and status. When the status field has both pass and fail values, the pie chart displays green for pass and red for fail as expected, but when there is only pass, it displays red, not green. Attached is the screenshot.

<chart>
  <search>
    <query>index="abc" | rex field=source "ame\/(?&lt;Type&gt;[^\/]+)" | search Type=$tok_type$ | rex field=_raw "(?i)^[^ ]* (?P&lt;status&gt;.+)" | stats latest(status) as status by host | stats count by status | eval chart = count + " " + status | fields chart, count</query>
    <earliest>$tok_time.earliest$</earliest>
    <latest>$tok_time.latest$</latest>
  </search>
  <option name="charting.chart">pie</option>
  <option name="charting.drilldown">none</option>
  <option name="charting.legend.labels">[FAIL,PASS]</option>
  <option name="charting.seriesColors">[#BA0F30,#116530]</option>
  <option name="refresh.display">progressbar</option>
  <option name="charting.chart.showPercent">true</option>
</chart>

Thanks in advance.
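A common cause of this behavior: charting.seriesColors assigns colors by position, so when only one slice exists it takes the first color in the list. Mapping colors to names with charting.fieldColors is position-independent. A sketch, assuming the pie's category values are exactly PASS and FAIL (with the eval chart = count + " " + status above, the labels include the count, so you would either drop that eval or key the colors to the actual label values):

```xml
<option name="charting.fieldColors">{"PASS": 0x116530, "FAIL": 0xBA0F30}</option>
```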
I am having a random issue where characters seem to be present in a field but cannot be seen. In the results below, even though the values appear to match each other, Splunk sees them as 2 distinct values. If I download and open the results, one of the two names has characters in it that are not visible when viewing the results in the Search app: in my text editor, one of the two names is in quotes; in Excel, one of the two names is preceded by ‚Äã. It feels like a problem with the underlying lookup files (.csv); however, the problem is not consistent, as only a very small percentage of results (<0.005%) has this incorrect format. Trying to use regex or replace to remove non-alphanumeric values in the field does not seem to work; I am at a loss. Any idea how to remove "non-visible" characters or correct this formatting?
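One approach that sometimes catches zero-width and other invisible Unicode characters is stripping everything outside the printable ASCII range (the field name here is a placeholder; eval's replace() uses PCRE, so \x escapes are supported):

```spl
| eval fieldname_clean=replace(fieldname, "[^\x20-\x7E]+", "")
```

If the values legitimately contain non-ASCII letters, the character class would need to be widened accordingly.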
Please consider upvoting a new Splunk idea to get more attention: https://ideas.splunk.com/ideas/EID-I-2226
Yes, I tried, but in my case I need to extract the whole content.payload as one field.
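If the goal is to pull content.payload out as a single field, an spath along these lines may work (the path is taken from your description; output names the new field):

```spl
| spath output=payload path=content.payload
```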
Hi @Mfmahdi, you could truncate your events by defining the max length of each event using the TRUNCATE option in props.conf. Otherwise, you could define a regex to exclude from each event the part that you don't want; for that, use the SEDCMD command in props.conf. For more info, see https://docs.splunk.com/Documentation/Splunk/9.2.0/admin/Propsconf Ciao. Giuseppe
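As a rough illustration of the two options (the sourcetype stanza and the SEDCMD pattern are placeholders to adapt):

```conf
# props.conf
[your_sourcetype]
# Option 1: hard-truncate each event at 10000 bytes
TRUNCATE = 10000

# Option 2: strip a specific unwanted portion with a sed-style substitution
SEDCMD-remove_unwanted = s/UNWANTED_PATTERN//g
```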
So, rock on! From the GUI one is able to create a default view and pass that to all the users. Bummer that it looks like those settings live in a KV store somewhere and not in a .conf file?
I am a new user to Splunk, working to create an alert that triggers if it has been more than 4 hours since the last event. I am using the following query, which I have tested; it comes back with a valid result:

index=my_index | stats max(_time) as latest_event_time | eval time_difference_hours = (now() - latest_event_time) / 3600 | table time_difference_hours

Result: 20.646666667

When I enable the alert, I set it to run on a schedule. Additionally, I choose a custom condition as the trigger and use the following:

eval time_difference_hours > 4

But the alert does not trigger. As you can see from the result, it has been 20 hours since the last event was received in Splunk. Not sure what I am missing. I have also modified the query to include a time span with earliest=-24h and latest=now, but that did not work either.
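One likely cause: a custom trigger condition is a secondary search run over the results, so it needs search syntax (e.g. search time_difference_hours > 4) rather than a bare eval expression. An alternative sketch moves the threshold into the query itself, so the alert can use the simpler "Number of Results > 0" trigger (index name taken from the post; note that earliest=-24h would miss the case where no events arrived at all in 24 hours, so widen the window if needed):

```spl
index=my_index earliest=-24h latest=now
| stats max(_time) as latest_event_time
| eval time_difference_hours = (now() - latest_event_time) / 3600
| where time_difference_hours > 4
```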