All Posts

The problem is that seriesColors is just a list of colours applied in order, so when there are two rows, the FAIL row always comes first and takes the first colour in the list. I believe the only way to solve this is to add a <done> clause after the search that calculates what the series colours should be, and then use the token, like this:

<search>
  ...
  <done>
    <eval token="series_colour">case($job.resultCount$=2, "#BA0F30,#116530", $job.resultCount$=1 AND match($result.chart$,"FAIL"), "#BA0F30", $job.resultCount$=1 AND match($result.chart$,"PASS"), "#116530")</eval>
  </done>
</search>
...
<option name="charting.seriesColors">[$series_colour$]</option>

The eval in the <done> clause checks whether there are two rows, in which case the series gets both colours; otherwise it checks the chart field for FAIL or PASS and sets the single series colour as appropriate. The seriesColors option then just uses the token.
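To see the branching of that case() expression outside Splunk, here is a minimal Python sketch of the same logic (the function name and arguments are hypothetical, not part of the dashboard; they stand in for $job.resultCount$ and $result.chart$):

```python
def series_colour(result_count, chart_value):
    """Mirror the <done> eval: pick pie colours from row count and status.

    Red (#BA0F30) is for FAIL, green (#116530) for PASS.
    """
    if result_count == 2:
        return "#BA0F30,#116530"  # both rows present; FAIL sorts first
    if result_count == 1 and "FAIL" in chart_value:
        return "#BA0F30"
    if result_count == 1 and "PASS" in chart_value:
        return "#116530"
    return None  # no branch matched; case() would likewise return nothing

print(series_colour(1, "3 PASS"))  # prints #116530
```

As with case() in SPL, the first matching condition wins, so the two-row check must come before the single-row checks.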
Is this a Classic or Dashboard Studio dashboard? Is TimeRangepanel the name of the dashboard or something else? How are you populating the dashboard panels? If they come from a saved panel or a report, is the report private or public? It sounds like the customer does not have permission to see some part of the dashboard.
Use this:

| eval overallpass=if('chassis ready'="yes" AND result="pass" AND synchronize="yes", "OK", "Not Okay")
| stats values(overallpass) as overallpass by hostname

NB: 'chassis ready' must be surrounded by SINGLE quotes so that eval understands the space is part of the field name. The values must be double-quoted.
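Outside SPL, the same check can be sketched in Python; this assumes each event is a dict keyed by the field names above (the function and variable names are illustrative only):

```python
def overallpass(event):
    """Return "OK" only when all three conditions hold, else "Not Okay"."""
    ok = (
        event.get("chassis ready") == "yes"
        and event.get("result") == "pass"
        and event.get("synchronize") == "yes"
    )
    return "OK" if ok else "Not Okay"

events = [
    {"hostname": "alpha", "chassis ready": "yes", "result": "pass", "synchronize": "no"},
    {"hostname": "beta", "chassis ready": "yes", "result": "pass", "synchronize": "yes"},
    {"hostname": "charlie", "chassis ready": "no", "result": "pass", "synchronize": "yes"},
]
for e in events:
    print(e["hostname"], overallpass(e))
```

Like the eval/if() in SPL, this is a single AND of the three conditions; any one failing yields "Not Okay".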
Here I am, over 3 years later, finding my own answer to help me out again  
Hey @isoutamo, I may have found the fix. I noticed that inputs.conf on the Splunk indexer side was missing the port in one of its stanzas. Adding "[splunktcp:<PORT>]" in $SPLUNK_BASE/etc/system/local on the indexers fixed the issue.
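For reference, a minimal receiving stanza of the kind described might look like this (9997 is only the conventional Splunk-to-Splunk port, not necessarily what this environment uses):

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf on each indexer
# 9997 is the conventional receiving port -- substitute your own
[splunktcp://9997]
disabled = 0
```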
Maybe? The prototype is here for anyone to grab. I'd need to find the time and resources for long-term maintenance of an app: development, build and integration, support, etc.
I have a simple Splunk setup: about 120 or so Linux servers (all basically appliances) with the universal forwarder installed, and a single Linux server running Splunk Enterprise acting as the indexer, search head, etc.

The problem I have is that the forwarders must feed each server's audit log into Splunk. That feed is actually working fine, but it's flooding the server and causing me to go over my license limit. Specifically, the appliance app has a cron job that runs very often, and it's flooding the audit log with file access, file modification, etc. events, which is ballooning the amount of data I send to Splunk Enterprise. Data that I simply do not need.

What I want to do is filter out these specific events, but ONLY for this specific user. I believe this can be done using transforms.conf and props.conf on the indexer, but I'm having trouble getting the syntax and fields right. Can anyone assist with this? Here's the data I need to remove:

sourcetype=auditd
acct=appuser
exe=/usr/sbin/crond
exe=/usr/bin/crontab

So basically ANY events in the audit log for user "appuser" that reference either "/usr/sbin/crond" or "/usr/bin/crontab" need to be dropped. Here are 2 examples of the events I want to drop:

type=USER_END msg=audit(03/04/2024 15:58:02.701:5726) : pid=26919 uid=root auid=appuser ses=184 msg='op=PAM:session_close grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct=appuser exe=/usr/sbin/crond hostname=? addr=? terminal=cron res=success'

type=USER_ACCT msg=audit(03/04/2024 15:58:02.488:5723) : pid=26947 uid=appuser auid=appuser ses=184 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_localuser acct=appuser exe=/usr/bin/crontab hostname=? addr=? terminal=cron res=success'

Can this be done?
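For what it's worth, the usual shape of such a filter is a props.conf/transforms.conf pair routing matching events to nullQueue. This is only a sketch: the sourcetype stanza name and the REGEX are assumptions based on the samples above and should be verified against real events before deploying:

```ini
# props.conf on the indexer (or first full Splunk instance that parses the data)
[auditd]
TRANSFORMS-drop_appuser_cron = drop_appuser_cron

# transforms.conf
[drop_appuser_cron]
# Match events for acct=appuser whose exe is crond or crontab
REGEX = acct=appuser\sexe=/usr/(sbin/crond|bin/crontab)
DEST_KEY = queue
FORMAT = nullQueue
```

Note that events dropped this way never reach an index, so they should no longer count against the license quota.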
The SqlServer module is available for PowerShell Core on various platforms. I can test a solution on Linux x86-64, but I don't have access to a macOS or ARM host. What platform is Server C?
Hi, I would like some help writing an eval where the values of 3 fields determine whether the eval field is OK or BAD.

For example, these are the 4 fields in total (hostname, "chassis ready", result, synchronize):

hostname=alpha "chassis ready"=yes result=pass synchronize=no
hostname=beta "chassis ready"=yes result=pass synchronize=yes
hostname=charlie "chassis ready"=no result=pass synchronize=yes

I would like an eval for 'overallpass' where ("chassis ready"=yes AND result=pass AND synchronize=yes) makes overallpass = "OK", and everything else makes overallpass = "Not Okay", by hostname. So based on the table above, here is the final output:

Hostname    overallpass
alpha       Not Okay
beta        OK
charlie     Not Okay
A custom segmenter has merit in this case, but globally, folks will recommend tagging events with an appropriate add-on (or custom configuration) and using an accelerated Web or other data model to find matching URLs, hostnames, etc.
Hello @tscroggins

Thank you for your follow-up. However, I am not sure that we can apply this solution to our environment, since our Server C (the one with the universal forwarder) is not a Windows server. The other servers (types A and B) are not connected to any external networks.

After some internal discussions, a new idea was proposed, and I am not sure if it could work. @tscroggins, @isoutamo, @scelikok, I would appreciate your feedback on it. The idea is to deploy a heavy forwarder on server B and a universal forwarder on server C (the one connected to Splunk Cloud):

Servers A (DB servers): Our databases generate SQLAudit files (and probably some Oracle DB audit files from similar servers). No external connections are allowed to this category of servers.

Server B (relay): This is the only server that can establish communications with the DB servers (category A). On this server, we can install a heavy forwarder + DB Connect to collect MSSQL audit logs (and Oracle audit logs from the Oracle servers). Please note that there are no external connections to this server and it cannot forward directly to Splunk Cloud.

Server C (universal forwarder): The only one with allowed external connections. The heavy forwarder on server B sends the collected logs to the universal forwarder on server C. The universal forwarder then uploads the SQLAudit files and Oracle audit files to the Splunk Cloud instance.

Do you think this is a feasible setup?

Best regards,
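In configuration terms, the relay leg of a design like that usually looks something like the sketch below. The hostname and port are placeholders, and server C would additionally need the outputs.conf from the Splunk Cloud universal forwarder credentials package to handle the final hop:

```ini
# Server B (heavy forwarder) -- outputs.conf, sending to server C
# "server-c.example.local" and port 9997 are placeholders
[tcpout]
defaultGroup = relay_to_c

[tcpout:relay_to_c]
server = server-c.example.local:9997

# Server C (universal forwarder) -- inputs.conf, receiving from server B
[splunktcp://9997]
disabled = 0
```

One design point worth checking: data leaving a heavy forwarder is already parsed ("cooked"), so the intermediate universal forwarder on server C simply relays it onward rather than re-reading the audit files.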
Hi @AJ_splunk1, just a heads-up that https://docs.splunk.com/Documentation/Splunk/9.2.0/Updating/Upgradepre-9.2deploymentservers states: "In particular, there is a new system-generated app, etc/apps/SplunkDeploymentServerConfig, which contains configuration files necessary to the proper functioning of the deployment server. Do not alter this directory or its files in any way." I chose to implement the 9.2 fix (also shown at the link above) as a separate app in etc/apps on the DS and called it something like "Fix_DSClientList", so it's more obvious there's a modification in place. I hope that helps.
Never mind, figured it out. It was outputs.conf in this app: etc/apps/SplunkDeploymentServerConfig. The documentation is a bit confusing on this.
Are you adding an additional IP/name to your host, or just switching the IP-name mapping?
UDP is not a reliable protocol by design! With it you will always lose events. The only question is how many, and the answer is an unknown amount.
Splunk 9.2.x has some new features/framework for the DS. That may help in your kind of environment, or it may not. But as it's an x.x.0 version, I wouldn't put it into production yet!
Does this work with Splunk Cloud as well? We have a Splunk on-prem deployment server, the indexers are all in the cloud, and we are experiencing the same issue where clients are not showing up after an update to 9.2.x. They are phoning home, however, as per the logs.
Hi,

I have created a dashboard with multiple panels, with a time range panel that defaults to the last 4 hours. On my end it works perfectly, but when the customer tries to use the dashboard, it shows "TimeRangepanel not found". Please assist me.
It comes down to whether Kafka can transport UDP and TCP reliably from all the devices to the various Splunk clusters.
I have a query where I am counting PASS and FAIL and displaying the result as a pie chart. I also modified the search so that it displays the count and status. When the status field has both pass and fail values, the pie chart displays green for pass and red for fail as expected, but when there is only pass it displays red, not green. Attached is the screenshot.

<chart>
  <search>
    <query>index="abc" | rex field=source "ame\/(?&lt;Type&gt;[^\/]+)" | search Type=$tok_type$ | rex field=_raw "(?i)^[^ ]* (?P&lt;status&gt;.+)" | stats latest(status) as status by host | stats count by status | eval chart = count + " " + status | fields chart, count</query>
    <earliest>$tok_time.earliest$</earliest>
    <latest>$tok_time.latest$</latest>
  </search>
  <option name="charting.chart">pie</option>
  <option name="charting.drilldown">none</option>
  <option name="charting.legend.labels">[FAIL,PASS]</option>
  <option name="charting.seriesColors">[#BA0F30,#116530]</option>
  <option name="refresh.display">progressbar</option>
  <option name="charting.chart.showPercent">true</option>
</chart>

Thanks in advance