All Topics


Hi Splunkers, for our customer we need to populate an external lookup. We are on a Splunk SaaS environment. A colleague has developed a simple app to achieve this purpose. After some tests, the lookup seems to be populated fine. Our current problem is: if we use this lookup in a search executed from the Search and Reporting app, it returns the expected results, with no issues and no missing data. But if we try the same data set and time range from another app that can execute a search (I mean, with the search function available), the output is empty. We suspect it's related to a permission problem (maybe the app has no permission to write to the underlying file system, since we are in a cloud environment?), but we are not sure. Moreover, even if we are right, how could we fix the issue?
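One thing worth ruling out before suspecting the file system: if the lookup file or lookup definition is shared only within the Search & Reporting app, other apps simply cannot see it, which would produce exactly this empty output. A minimal sketch for checking the sharing level, assuming a lookup definition name of my_external_lookup (hypothetical placeholder):

| rest /servicesNS/-/-/data/transforms/lookups splunk_server=local
| search title="my_external_lookup"
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read

If eai:acl.sharing comes back as "app" and eai:acl.app is "search", sharing the lookup globally (or to the other app) from its permissions page would be the usual fix.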
I am trying to create a popup that should open as soon as you click on the link of the dashboard. However, on entering the HTML code I am getting the following error: "Entity 'times' not defined". Can anyone please tell me how to fix this issue? It is a bit urgent, so quick help would be appreciated a lot. Here is the tag: <span class="close">&times;</span>
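A note on the error: Simple XML dashboards are parsed as XML, and &times; is an HTML entity that XML does not define, which is what triggers "Entity 'times' not defined". A minimal sketch of a workaround is to use the numeric character reference for the same × character instead:

<span class="close">&#215;</span>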
Suppose we want to create a dashboard on the notable index which is shared with all teams, but we want each team's dashboard to show only the notables related to that team. We also want to restrict them to only this specific search, so they cannot even change the search when they click the search icon on the dashboard. Is it possible? Or how can we segregate notables in a single ES deployment?
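One common way to segregate what each team can see, independent of any one dashboard, is a role-based search filter in authorize.conf. A minimal sketch, where the role name and the filter expression are purely illustrative and assume the team can be identified from a field on the notable events:

# authorize.conf (role name and filter are illustrative)
[role_team_a]
srchFilter = (source="*team_a*")

Users in that role then have the filter appended to every search they run, including searches launched from a dashboard drilldown.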
I'm looking to run a | rest command to return a list of apps and app versions sent from the management node (i.e. manager-apps). I'm only seeing an option that will return local apps (/opt/splunk/etc/apps), nothing from /opt/splunk/etc/manager-apps.
Can anyone please provide a .js file for displaying a popup in my Splunk dashboard?
Getting the errors below while installing the splunklib and splunk-sdk Python packages. Any resolutions, please?

Building wheels for collected packages: pycrypto
  Building wheel for pycrypto (pyproject.toml) ... error
  error: subprocess-exited-with-error
  × Building wheel for pycrypto (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [28 lines of output]
      warning: GMP or MPIR library not found; Not building Crypto.PublicKey._fastmath.
      winrand.c
      ............
      [end of output]
  note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pycrypto
Failed to build pycrypto
ERROR: Could not build wheels for pycrypto, which is required to install pyproject.toml-based projects
Hello all, I'm trying to get the duration between the first "started" event and the first "connected" event following it, grouped by each user id.

The data
The events are structured like the following (assume these all have real timestamps; I am abbreviating to keep it short; the item numbers on the left are for annotation purposes only):

item  userId  status     _time (abbreviated)
0     1       started    00:00
1     1       connected  00:05
2     2       started    00:30
3     2       connected  00:40
4     2       connected  01:30
5     4       started    02:00
6     3       connected  02:05
7     3       started    02:10
8     3       connected  02:20
9     4       connected  02:30
10    5       started    03:00

What I'm looking to achieve:
A) I need to make sure I start the clock whenever the user has a "started" state (e.g., item no. 6 should be neglected).
B) It must take the first connected event following "started" (e.g., item no. 3 is the end item, with item no. 4 being ignored completely).
C) I want to graph the number of users bucketed by intervals of 15 seconds.
D) There must be both a started and a connected event (e.g., userId 5 would not be added).

How would I approach this? I tried the following:

... status="started" OR status="connected"
| stats range(_time) AS duration BY userId
| where duration > 0
| bin span 15 duration
| stats dc(userid) as Users by duration

But this isn't quite doing what I want it to do, and I also get events where there's no duration.
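A minimal sketch of one possible approach, assuming the field names are userId and status as shown above (treat it as a starting point rather than a finished search):

... status="started" OR status="connected"
| eval started_time=if(status="started", _time, null())
| eventstats min(started_time) as start_time by userId
| where status="connected" AND _time >= start_time
| stats min(_time) as connect_time, min(start_time) as start_time by userId
| eval duration = connect_time - start_time
| bin duration span=15
| stats dc(userId) as Users by duration

The eventstats pass pins each user's first "started" time; the where clause keeps only connected events at or after it, so connected-before-started events and users missing either state drop out (points A, B and D); the final bin/stats does the 15-second bucketing in C.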
Hi Splunk Community, I'm trying to list all Splunk local users (authentication system = splunk). The search below lists all users, both SAML and Splunk, but I'm only looking for local accounts.

| rest /services/authentication/users splunk_server=local
| fields roles title realname
| rename title as username

Thanks!
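A minimal sketch, assuming the authentication/users endpoint exposes a type field indicating each user's authentication system (worth verifying on your version before relying on it):

| rest /services/authentication/users splunk_server=local
| search type="Splunk"
| fields title realname roles
| rename title as username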
Does anybody have a better doc for this page? I think it's a copy and paste gone wrong. The UiPath configuration is mixed with the Splunk UF configuration for Windows. rpm_app_for_splunk/docs/UiPath_orchestrator_nLog.MD at main · splunk/rpm_app_for_splunk · GitHub
I have a simple Splunk setup: about 120 or so Linux servers (that are all basically appliances) with the universal forwarder installed, and a single Linux server running Splunk Enterprise acting as the indexer, search head, etc. The problem I have is that the forwarders must feed each server's audit log into Splunk. That feed is actually working fine, but it's flooding the server and causing me to go over my license limit. Specifically, the appliance app has an event in cron that runs very often, and it's flooding the audit log with file access, file modification, etc. events, which is ballooning the amount of data I send to Splunk Enterprise. Data that I simply do not need. What I want to do is filter out these specific events, but ONLY for this specific user. I believe this can be done using transforms.conf and props.conf on the indexer, but I'm having trouble getting the syntax and fields right. Can anyone assist with this? Here's the data I need to remove:

sourcetype=auditd acct=appuser exe=/usr/sbin/crond exe=/usr/bin/crontab

So basically ANY events in the audit log for user "appuser" that reference either "/usr/sbin/crond" or "/usr/bin/crontab" need to be dropped. Here are 2 examples of the events I want to drop:

type=USER_END msg=audit(03/04/2024 15:58:02.701:5726) : pid=26919 uid=root auid=appuser ses=184 msg='op=PAM:session_close grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct=appuser exe=/usr/sbin/crond hostname=? addr=? terminal=cron res=success'

type=USER_ACCT msg=audit(03/04/2024 15:58:02.488:5723) : pid=26947 uid=appuser auid=appuser ses=184 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_localuser acct=appuser exe=/usr/bin/crontab hostname=? addr=? terminal=cron res=success'

Can this be done?
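A minimal sketch of the props.conf/transforms.conf route, assuming the sourcetype is auditd as shown above and that acct= is immediately followed by exe= as in the two sample events (adjust the regex if the field order varies). This goes on the indexer and applies at index time, so it only affects newly indexed data:

# props.conf
[auditd]
TRANSFORMS-drop_appuser_cron = drop_appuser_cron

# transforms.conf
[drop_appuser_cron]
REGEX = acct=appuser\s+exe=/usr/(sbin/crond|bin/crontab)
DEST_KEY = queue
FORMAT = nullQueue

Matching events are routed to the null queue and never indexed, so they also stop counting against the license.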
Hi, I would like some help with an eval function where the values of 3 fields determine whether the eval field value is OK or Not Okay.

Example: these are the 4 fields in total (hostname, "chassis ready", result, synchronize):

hostname=alpha    "chassis ready"=yes  result=pass  synchronize=no
hostname=beta     "chassis ready"=yes  result=pass  synchronize=yes
hostname=charlie  "chassis ready"=no   result=pass  synchronize=yes

I would like to do an eval for 'overallpass' where ("chassis ready"=yes AND result=pass AND synchronize=yes) makes 'overallpass'=OK, and everything else makes overallpass="Not Okay", by hostname. So based on the table above, here is the final output:

Hostname    overallpass
alpha       Not Okay
beta        OK
charlie     Not Okay
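A minimal sketch, assuming the field names are exactly as shown (note the single quotes needed around the field name containing a space when it is read inside eval):

| eval overallpass = if('chassis ready'="yes" AND result="pass" AND synchronize="yes", "OK", "Not Okay")
| table hostname overallpass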
Hi, I have created a dashboard with multiple panels. I created the time range panel to default to the last 4 hours, and on my end it works perfectly. When the customer tries to use the dashboard, it shows "TimeRangepanel not found". Please assist me.
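For reference, a minimal sketch of a Simple XML time input with a last-4-hours default that panels reference through a token (the token name time_tok is illustrative); comparing against something like this may help if the customer's copy is missing the input or uses a different token name:

<input type="time" token="time_tok" searchWhenChanged="true">
  <label>Time Range</label>
  <default>
    <earliest>-4h@h</earliest>
    <latest>now</latest>
  </default>
</input>

The panel searches would then use $time_tok.earliest$ and $time_tok.latest$.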
I have a query where I am counting PASS and FAIL and displaying it as a pie chart. I also modified the search so that it displays the count and status. When the status field has both pass and fail values, the pie chart displays green for pass and red for fail as expected, but when there is only pass it displays red, not green. Attached is the screenshot.

<chart>
  <search>
    <query>index="abc" | rex field=source "ame\/(?&lt;Type&gt;[^\/]+)" |search Type=$tok_type$ | rex field=_raw "(?i)^[^ ]* (?P&lt;status&gt;.+)" | stats latest(status) as status by host | stats count by status | eval chart = count + " " + status | fields chart, count</query>
    <earliest>$tok_time.earliest$</earliest>
    <latest>$tok_time.latest$</latest>
  </search>
  <option name="charting.chart">pie</option>
  <option name="charting.drilldown">none</option>
  <option name="charting.legend.labels">[FAIL,PASS]</option>
  <option name="charting.seriesColors">[#BA0F30,#116530]</option>
  <option name="refresh.display">progressbar</option>
  <option name="charting.chart.showPercent">true</option>
</chart>

Thanks in advance.
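One thing worth noting about the symptom: charting.seriesColors assigns colors by position, so when only one slice is present it takes the first color in the list (the red). A minimal sketch of pinning colors to the slice names instead, assuming the pie is split by the literal values PASS and FAIL; if you keep the concatenated chart field, the keys would have to match those exact labels instead:

<option name="charting.fieldColors">{"PASS": 0x116530, "FAIL": 0xBA0F30}</option>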
I am having a random issue where it seems characters are present in a field that cannot be seen. If you look at the results below, even though they appear to match each other, Splunk sees them as 2 distinct values. If I download and open the results, one of the two names has characters in it that are not visible when looking at the results in the Search app. If I open the file in my text editor, one of the two names is in quotes; if I open the file in Excel, one of the two names is preceded by ‚Äã. It feels like a problem with the underlying lookup files (.csv); however, this problem is not consistent, and only a very small percentage of results has this incorrect format (<.005%). Trying to use regex or replace to remove non-alphanumeric values in the field does not seem to work, and I am at a loss. Any idea how to remove "non-visible" characters or correct this formatting?
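The ‚Äã prefix is typically what a UTF-8 zero-width space (U+200B) looks like when displayed in a legacy encoding, which would explain why the two values look identical but compare as different. A minimal sketch of stripping anything outside the printable ASCII range (space through tilde), assuming the affected field is called name (a hypothetical field name); the same eval could be applied when the lookup is populated so the stored CSV stays clean:

| eval name_clean = replace(name, "[^ -~]", "")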
I am a new user to Splunk and I am working to create an alert that triggers if it has been more than 4 hours since the last event. I am using the following query, which I have tested and which comes back with a valid result:

index=my_index
| stats max(_time) as latest_event_time
| eval time_difference_hours = (now() - latest_event_time) / 3600
| table time_difference_hours

Result: 20.646666667

When I go in and enable the alert, I set it to run on a recurring schedule. Additionally, I chose a custom condition as the trigger and used the following:

eval time_difference_hours > 4

But the alert does not trigger. As you can see from the result, it has been 20 hours since the last event was received in Splunk. I'm not sure what I am missing. I have also modified the query to include a time span with earliest=-24H and latest=now, but that did not work either.
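One thing that may be tripping this up: the custom trigger condition is itself a secondary search that runs over the alert's results, so an eval statement there will not fire anything. A minimal sketch of a condition that fits the query above (assuming the field name time_difference_hours as in the search):

search time_difference_hours > 4

An alternative is to append | where time_difference_hours > 4 to the alert search itself and trigger on "number of results" greater than 0.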
Splunk version: splunk-9.2.0.1 Host: Linux (Rocky 9) Hello, I am a new user testing Splunk. I installed the instance on Linux (Rocky 9). From reading various Q&A and docs, I see the location to change the instance address/IP and port is a file within the installation directory called splunk-launch.conf, though it doesn't look like this file exists anymore. Please guide me through changing these settings in the latest version of Splunk (9.2.0.1) from the Unix CLI. My goal is to change the web interface address from http://alpha:8000 to http://beta:8000. Thank you.
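For what it's worth, splunk-launch.conf normally lives directly under $SPLUNK_HOME/etc, but the web interface port and bind address are usually set in web.conf instead. A minimal sketch, where the values are illustrative (the hostname shown in the URL otherwise simply follows the machine's DNS name):

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
httpport = 8000
server.socket_host = beta

A restart ($SPLUNK_HOME/bin/splunk restart) is needed for the change to take effect.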
Hi All, I have an alert that shows results for 7:00 AM to 7:01 AM with more than 20 results. The cron for the alert is: * 6-15 * * 1-5, and the condition is: more than 4 results. I checked and found there were more than 4 results in the timeframe 7:00 AM to 7:01 AM, but the alert did not trigger an email. The same alert did trigger at 8 AM, though. On checking the internal logs I can see that at 7 AM alert_actions="", but at 8 AM I can see alert_actions="email", which confirms that no email action ran at 7 AM. What else can I check to confirm what happened?
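A minimal sketch of a scheduler-log search that may help narrow it down, assuming the saved search is named "My Alert" (hypothetical name); fields like suppressed and status, if present in your version, show whether throttling or a skipped run got in the way of the 7 AM execution:

index=_internal sourcetype=scheduler savedsearch_name="My Alert"
| table _time status result_count suppressed alert_actions run_time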
We are getting WAF logs and the events are very big. We need to drop some lines that have no meaningful value from the events, not the whole event. @gcusello thank you in advance.
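A minimal sketch of the usual approach, a SEDCMD in props.conf applied at index time; the sourcetype name waf:events and the pattern are purely illustrative and would need to match whatever the noisy lines actually look like:

# props.conf (on the indexer or heavy forwarder)
[waf:events]
SEDCMD-strip_noise = s/DebugTrace:[^\r\n]*[\r\n]*//g

This rewrites matching portions of the event before indexing, so the rest of the event is kept while the unwanted lines (and their license cost) are removed.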
Installed Splunk Add-on for Unix and Linux 9.0.0, but not getting memory data for an Ubuntu server. Checks performed: 1) Getting data for logical disk space and CPU, but not memory. 2) The sar utility is installed. I enabled the hardware, CPU, and df metric stanzas and added the index details too.
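For reference, memory metrics in that add-on come from the vmstat script, so the corresponding input stanza has to be enabled as well; a minimal sketch, assuming the add-on is installed as Splunk_TA_nix and the stanza goes in its local/inputs.conf (interval and index values are illustrative):

[script://./bin/vmstat.sh]
interval = 60
sourcetype = vmstat
source = vmstat
index = os
disabled = 0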
So, I have one source (transactions) with userNumber and another source (users) with number. I want to join both of them. In each source, they have different field names. I want my table to have the employee's name, which is in the users source, and which I get separately in the 2nd query inside the join. Below is my SPL as of now:

index=* sourcetype=transaction
| stats dc(PARENT_ACCOUNT) as transactionMade by POSTDATE, USERNUMBER
| join left=L right=R where L.USERNUMBER=R.NUMBER
    [search sourcetype=users | stats values(NAME) as Employee by NUMBER]
| table USERNUMBER Employee PARENT_ACCOUNT POSTDATE transactionMade

What is it that I am doing wrong?
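A couple of things stand out: PARENT_ACCOUNT no longer exists after the first stats (only the dc() of it survives), and with the left=L right=R form the joined fields may end up referenced through the L./R. aliases rather than their bare names. A minimal sketch of a plainer variant, assuming the goal is one row per POSTDATE/USERNUMBER with the employee name attached:

index=* sourcetype=transaction
| stats dc(PARENT_ACCOUNT) as transactionMade by POSTDATE USERNUMBER
| join type=left USERNUMBER
    [ search index=* sourcetype=users
      | rename NUMBER as USERNUMBER
      | stats values(NAME) as Employee by USERNUMBER ]
| table USERNUMBER Employee POSTDATE transactionMade

If PARENT_ACCOUNT itself is needed in the table, it has to be kept explicitly, e.g. with values(PARENT_ACCOUNT) added to the first stats.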