
Hi all, I'm getting these error messages - Streamed search execute failed because: Error in 'lookup' command: Could not construct lookup 'simple_identity_lookup' ... After including "| inputlookup local=t simple_identity_lookup" in my search, it seems to run fine. My question is: why is this happening? Could it be that the bundles could not replicate to the indexers? If so, what is the default port for bundle replication from the SHC to the indexers, so that I can allow the traffic on those ports? If not, what other possible causes may there be?
Is it possible to filter search result rows by a search expression applied to all fields of a row? According to the documentation for regex, it appears you should be able to use it without specifying a field:   | ... | regex "some regex search string"   However, when I give it a try, it yields no results. I did find this while searching the internet:   | ... | eval matchCount=0 | foreach * [eval matchCount = matchCount + if(match(<<FIELD>>, "my regex search string"), 1, 0) ] | where matchCount > 0    However, I was wondering whether there is a way to do this without adding the 'matchCount' column.
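For what it's worth, one way to keep the foreach workaround but hide the helper column is simply to drop it at the end (a sketch reusing the placeholder regex from the post; if I recall correctly, a bare | regex applies to the _raw field, which may explain the empty results when the pattern only matches extracted field values):

```spl
| eval matchCount=0
| foreach * [eval matchCount = matchCount + if(match('<<FIELD>>', "my regex search string"), 1, 0)]
| where matchCount > 0
| fields - matchCount
```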
Hello! We have a requirement to create an alert for one of our cloud applications' data. Fields such as account name and account ID should be sent to the respective RemediationContactEmail address. We are able to create an alert with all of the above, with a CSV attachment, by using the sendemail command. However, we observed that for a particular set of results, if the recipients are the same, they will receive an email for each result. For example, we tried the sample query below to build some sample event sets using makeresults:

| makeresults
| eval id="12345"
| eval Account_ID=1234567
| eval Remediation_Contact_Email="abc123@xyz.com"
| append
    [| makeresults
    | eval id="67890"
    | eval Account_ID=4567895
    | eval Remediation_Contact_Email="abc123@xyz.com" ]
| append
    [| makeresults
    | eval id="13579"
    | eval Account_ID=6785432
    | eval Remediation_Contact_Email="abc123@xyz.com" ]
| map
    [ makeresults
    | eval id="$id$"
    | eval Account_ID=$Account_ID$
    | eval Remediation_Contact_Email="$Remediation_Contact_Email$"
    | fields - _time
    | sendemail to=$Remediation_Contact_Email$ subject="Test Sendemail" message=" Hello, There is an alert for your account  id  : $id$  account id : $Account_ID$  Regards, xyz Security Operation Team" maxinputs=10000 sendcsv=true inline=true format=csv priority=1 ]

Here the recipient "abc123@xyz.com" received 3 different emails, one per result, each with an attachment, as shown in the screenshot below. Any help or guidance would be much appreciated on grouping all the relevant results in the data set by remediation contact email address and sending each recipient their results in a single attachment.
We tried to group it using the stats command, however the attachment doesn't look good, as it ends up with a single row containing all results for that particular email. We have many RemediationContactEmail addresses for each account group in the data set, so if 10 alerts are triggered for one RemediationContactEmail address, all 10 should be consolidated and grouped from the data set and sent to that recipient as one attachment, rather than as 10 different emails.
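One pattern that is sometimes suggested (a sketch only, untested against sendemail's CSV handling, and assuming the field names from the example above) is to roll results up per recipient before the map, so that sendemail fires once per address rather than once per result:

```spl
| stats list(id) as id, list(Account_ID) as Account_ID by Remediation_Contact_Email
| map maxsearches=100 search="| makeresults
    | eval ids=\"$id$\", account_ids=\"$Account_ID$\"
    | sendemail to=\"$Remediation_Contact_Email$\" subject=\"Test Sendemail\" sendcsv=true inline=true format=csv"
```

How multivalue fields expand inside map tokens varies, so the `eval` flattening step in particular should be verified against real data.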
I have a firewall that is sending its logs using UTC time; actually, all of our network devices send data in UTC. I extracted the fields of the log file because it's comma-separated. The problem I am having is that the Splunk server is using EDT (UTC-4). So when I search and select "last 60 minutes" or any other relative range, it shows the logs from 4 hours ago instead of the logs from the last 60 minutes to now. This is the only device on my network having this problem in Splunk. I am a power user, not the administrator, because of how roles are separated where I work. I tried changing DATETIME_CONFIG to CURRENT in the sourcetype settings, but searches still interpret the data as UTC and don't show the latest data. I have been getting by by always remembering to shift the exact time window by 4 hours, and to get my dashboards to present the right time I have to use the time picker, which is starting to get annoying. While troubleshooting I have forgotten to change the time and gone crazy not being able to find the correct traffic data. To pinpoint the problem: as I said earlier, I am a power user only, so I do not have access to the syslog-ng server that collects the syslog data and forwards it to the search head, nor to the search head itself, so I cannot change the props.conf file as I have seen people recommend. I am only able to make changes in the Splunk web portal. Any help is appreciated.
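For reference, if an admin can be looped in, the standard fix is a time-zone override in props.conf for that sourcetype on the first full Splunk instance that parses the data (indexer or heavy forwarder). A sketch with a placeholder sourcetype name; note that this, like DATETIME_CONFIG, only affects events indexed after the change:

```spl
[my_firewall_sourcetype]
TZ = UTC
```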
Hi Splunk Guys, please help me with this query. My requirement is to get the server count by version for 3 different time windows. The result should look like this:

Version Type    30 Days    60 Days    90 Days
1.1.1.00        7          9          18
2.1.3.4         8          10         14
3.1.4.6         10         15         18

index=_internal sourcetype=servers_list earliest=-30d | stats count(Parameter) as ServersCount_Last30Days by Value

The query above gives me the last 30 days' result only.
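One way to get all three windows from a single search is conditional counting over the full 90 days (a sketch; "Version" is a placeholder for whatever field actually holds the version string in these events):

```spl
index=_internal sourcetype=servers_list earliest=-90d
| stats count(eval(_time >= relative_time(now(), "-30d"))) as "30 Days",
        count(eval(_time >= relative_time(now(), "-60d"))) as "60 Days",
        count as "90 Days"
  by Version
```

Each eval-based count only increments when the event falls inside the narrower window, so the three columns come out of one pass over the data.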
I am creating a single-series timechart that counts ids, and I am using | appendpipe [stats count | where count=0] to display 0 instead of 'No results found'. The timechart renders correctly, but it also shows 'undefined NaN' as shown below. Can anyone please suggest why this is happening and how to get rid of it? Thanks.
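A hedged guess: the row added by appendpipe has no _time value, so the chart has nothing to plot it against and renders 'undefined NaN'. Giving the synthetic row a timestamp may fix it (a sketch; the base timechart is a placeholder):

```spl
| timechart span=1h count(id) as count
| appendpipe [stats count | where count=0 | eval _time=now()]
```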
Hi, I am getting the error below, although I have not made any changes in the default location. Can someone please help me resolve this?

Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunk/splunk/splunk-7.2.5.1-962d9a8e1586-linux-2.6-x86_64-manifest'
Could not open '/opt/splunk/splunk/etc/apps/splunk_instrumentation/default/alert_actions.conf': No such file or directory
Could not open '/opt/splunk/splunk/etc/apps/splunk_instrumentation/default/app.conf': No such file or directory
Could not open '/opt/splunk/splunk/etc/apps/splunk_instrumentation/default/collections.conf': No such file or directory
Could not open '/opt/splunk/splunk/etc/apps/splunk_instrumentation/default/commands.conf': No such file or directory
Could not open '/opt/splunk/splunk/etc/apps/splunk_instrumentation/default/inputs.conf': No such file or directory
Could not open '/opt/splunk/splunk/etc/apps/splunk_instrumentation/default/macros.conf': No such file or directory
Could not open '/opt/splunk/splunk/etc/apps/splunk_instrumentation/default/props.conf': No such file or directory
Could not open '/opt/splunk/splunk/etc/apps/splunk_instrumentation/default/restmap.conf': No such file or directory
Could not open '/opt/splunk/splunk/etc/apps/splunk_instrumentation/default/savedsearches.conf': No such file or directory
Could not open '/opt/splunk/splunk/etc/apps/splunk_instrumentation/default/searchbnf.conf': No such file or directory
Could not open '/opt/splunk/splunk/etc/apps/splunk_instrumentation/default/telemetry.conf': No such file or directory
Could not open '/opt/splunk/splunk/etc/apps/splunk_instrumentation/default/web.conf': No such file or directory
Problems were found, please review your files and move customizations to local

Thanks and regards, Pankaj
Hi, I had a customer who was using a TA to get data from Cisco ESA into Splunk. They wondered whether it is possible to ingest multiline events into Splunk from different data sources at different times without ending up with duplicate events in Splunk. Any help on this issue would be greatly appreciated.
Hi Team, we have a requirement to create a health rule: if the channel is in RETRYING mode we need to get an alert, which means the value of retrying mode is 2. I tried to configure the health rule, but an "equal to" option is not available, so I created two conditions, one being specificvalue > 1, the other specificvalue . In this case, do I need to use (A and B), or (A or B), to get the expected output? Please suggest whether there is any other best practice to get the expected output. Regards, Vino
Please I am looking for a query to search for the top alerts that fired within 2 weeks (or within a time frame). I am also looking for a query to show anomalies within a time frame
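As a starting sketch (assuming the alerts are saved searches with alert actions, and that scheduler logging is at its default level), the most frequently fired alerts over two weeks can be pulled from the _internal index:

```spl
index=_internal sourcetype=scheduler alert_actions=* earliest=-14d
| stats count by savedsearch_name
| sort - count
```

For anomalies, one option is to pipe a base search over the same time frame into Splunk's `anomalydetection` command, though the right detection method depends heavily on the data.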
Hi team, how can I combine the two queries below into one single query and present the output in a single table?

query 1: index="dev_envi" | chart count(ApplicationName) over ApplicationName by Status | addtotals
Result: app1 5, app2 8

query 2: index="dev_envi" | chart count(event.ApplicationName) over event.ApplicationName by event.Status | addtotals
Result: app1 3, app2 6

Now I need a single query that adds both sets of values above and displays them in the dashboard like this: app1 8, app2 14
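One possible approach (a sketch, assuming each event carries either ApplicationName or event.ApplicationName but not both) is to normalise the two field-name variants into one field before charting, so a single chart counts across both shapes:

```spl
index="dev_envi"
| eval app=coalesce(ApplicationName, 'event.ApplicationName'),
       status=coalesce(Status, 'event.Status')
| chart count over app by status
| addtotals
```

The single quotes around 'event.ApplicationName' tell eval to treat the dotted name as a field reference rather than a string literal.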
Having issues with one user trying to authenticate into Splunk. We're using LDAP auth. User has the same primary group as another individual that can log in. That primary group is used to grant access to Splunk. User does not have any other group memberships that are mapped in Splunk for authentication, so no conflicts that I can tell. User is in the same OU as users that can authenticate. Only have 1 LDAP strategy, and only this 1 user is affected. Have confirmed that the user used for the LDAP strategy can query and see the affected user via Get-Aduser. One thing I noticed in splunkd.log is the search filter appears a bit odd. 09-10-2020 09:30:35.191 -0700 DEBUG AuthenticationManagerLDAP - Attempting to get roles for user="flastname" with DN="CN=Last\, First,OU=OU2,OU=OU1,OU=Users,DC=company,DC=com" in strategy="Company-LDAP-USERROLE" 09-10-2020 09:30:35.194 -0700 ERROR AuthenticationManagerLDAP - Couldn't find matching groups for user="flastname". Search filter="(&(member=CN=Last\5C, First,OU=OU2,OU=OU1,OU=Users,DC=company,DC=com)(|(CN=USERROLE*)(CN=OTHERUSERROLE*)))" strategy="Company-LDAP-USERROLE" In the filter I see what looks to be an added 5C, which is hex code for \ in ASCII. Is it adding an additional piece that shouldn't be there? Might be a red herring though.
I am attempting to use the "TA-Sysmon-deploy" Splunkbase app to deploy and maintain Sysmon on our endpoints. I've noticed that the script which checks for Sysmon and then installs it does not run correctly: it always results in a "sysmon not found" situation and re-installs it. That would be expected behavior if the script did not see Sysmon running, or detected it was out of date. Nonetheless, the script completes each time by installing Sysmon again and again, even though the host has the proper version of Sysmon installed and running. The peculiar thing is that it works correctly if I run the batch script manually from an Admin (as system) command prompt, but not when run by the Splunk Universal Forwarder. I've added echo statements so I can check the script variables just before they go into the deployment IF statements. They are correct when run manually but not when executed by Splunk. Any comments or suggestions would be helpful. I have included sample logs and the script below. Thank you, Ken

sysmon.log when Splunk runs the batch file via the input setting:
Thu 09/10/2020- 9:19:40.03 The SplunkUniversalForwarder is installed at C:\Program Files\SplunkUniversalForwarder
Thu 09/10/2020- 9:19:40.03 Checking for Sysmon
CHECK_SYSMON_VERSION=""
CHECK_SYSMON_RUNNIG=""
Thu 09/10/2020- 9:19:40.03 Sysmon not found, proceding to install
Thu 09/10/2020- 9:19:40.03 Copying the latest config file
0% copied 100% copied 1 file(s) copied.
Thu 09/10/2020- 9:19:40.03 Installing Sysmon
Thu 09/10/2020- 9:19:40.03 Install complete!

sysmon.log when run from an Admin command prompt (as "system"):
Wed 09/09/2020- 9:08:59.03 The SplunkUniversalForwarder is installed at C:\Program Files\SplunkUniversalForwarder
Wed 09/09/2020- 9:08:59.03 Checking for Sysmon
CHECK_SYSMON_RUNNIG="1"
CHECK_SYSMON_VERSION="1"
Wed 09/09/2020- 9:08:59.03 Sysmon found, checking version
Wed 09/09/2020- 9:08:59.03 Sysmon already up to date, exiting

Here is the script from the deploy.bat file.
This batch file is part of "TA-Sysmon-deploy" from Splunkbase. I have added the following to the script while troubleshooting:
- SETLOCAL and ENDLOCAL: removes any variable influences from outside the script
- Enclosed the version-check FOR statement in an IF EXIST clause (the script seemed to error out if sysmon.exe did not exist)
- Added variable-output "echo" statements so I can see the variables in the logs just before the IF statements

TA's deploy.bat file:

ECHO OFF
SETLOCAL
FOR /F "delims=" %%i IN ('wmic service SplunkForwarder get Pathname ^| FINDSTR /m service') DO SET SPLUNKDPATH=%%i
SET SPLUNKPATH=%SPLUNKDPATH:~1,-28%
>> %WINDIR%\sysmon.log (
ECHO %DATE%-%TIME% The SplunkUniversalForwarder is installed at %SPLUNKPATH%
ECHO %DATE%-%TIME% Checking for Sysmon
FOR /F "delims=" %%c IN ('sc query "Sysmon" ^| FIND /c "RUNNING"') DO (
SET CHECK_SYSMON_RUNNIG=%%c
)
IF EXIST %WINDIR%\sysmon.exe (
FOR /F "delims=" %%b IN ('c:\windows\sysmon.exe ^| FIND /c "System Monitor v11.11"') DO (
SET CHECK_SYSMON_VERSION=%%b
)
)
ECHO CHECK_SYSMON_VERSION="%CHECK_SYSMON_VERSION%"
ECHO CHECK_SYSMON_RUNNIG="%CHECK_SYSMON_RUNNIG%"
if "%CHECK_SYSMON_RUNNIG%" == "1" (
ECHO %DATE%-%TIME% Sysmon found, checking version
IF "%CHECK_SYSMON_VERSION%" == "1" (
ECHO %DATE%-%TIME% Sysmon already up to date, exiting
ENDLOCAL
EXIT
) ELSE (
ECHO %DATE%-%TIME% Sysmon binary is outdated, un-installing
IF EXIST %WINDIR%\sysmon.exe ( %WINDIR%\sysmon.exe -u )
)
) ELSE (
ECHO %DATE%-%TIME% Sysmon not found, proceding to install
ECHO %DATE%-%TIME% Copying the latest config file
COPY /z /y "%SPLUNKPATH%\etc\apps\TA-Sysmon-deploy\bin\config.xml" "C:\windows\"
ECHO %DATE%-%TIME% Installing Sysmon
"%SPLUNKPATH%\etc\apps\TA-Sysmon-deploy\bin\sysmon.exe" /accepteula -i c:\windows\config.xml | Find /c "Sysmon installed" 1>NUL
ECHO %DATE%-%TIME% Install complete!
ENDLOCAL
EXIT
)
ECHO %DATE%-%TIME% Install failed
)
ENDLOCAL
I am polling the Alpaca stock API and getting the following JSON payload: {"APPL":[{"t":1599753600,"o":12.72,"h":12.785,"l":12.72,"c":12.785,"v":524}],"AMZN":[{"t":1599753600,"o":13.99,"h":13.99,"l":13.99,"c":13.99,"v":200}],"GOOG":[{"t":1599753600,"o":33.69,"h":33.82,"l":33.69,"c":33.82,"v":3372}],"XOM":[{"t":1599753600,"o":37.67,"h":37.78,"l":37.66,"c":37.76,"v":10404}]} Splunk is extracting fields like the following, for example: XOM{}.c=37.76 XOM{}.h=37.78 XOM{}.l=37.66 XOM{}.o=37.67 XOM{}.t=1599753600 XOM{}.v=10404 I'm trying to timechart the different stocks' 'c' values from yesterday, where 'c' is the changing close value:  host="Alpaca:Stock" earliest=-1d@d+510m latest=-1d@d+15h | timechart span=5m values(*{}.c) | rename values(APPL{}.c) as APPL values(AMZN{}.c) as AMZN values(GOOG{}.c) as GOOG values(XOM{}.c) as XOM I would like to add a dropdown select to choose from the various stock symbols coming in the payload, so the timechart panel only displays the selected stock symbol's data. I have been having trouble finding a way to use the stock symbol in the array path, or to use something like click.name2 as the token. I've tried spath and am not having any luck creating a new field to use as a token. Any help on this is greatly appreciated!
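One hedged idea: dashboard tokens are substituted as plain text before the search is parsed, so the symbol chosen in a dropdown input (token name assumed to be $stock$ here) can be spliced straight into the field path, something like:

```spl
host="Alpaca:Stock" earliest=-1d@d+510m latest=-1d@d+15h
| timechart span=5m values($stock${}.c) as $stock$
```

The dropdown's choice values would then just be the literal symbols APPL, AMZN, GOOG, XOM.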
Hello everyone, does anyone happen to know why I get this stack trace error when configuring the account on the AWS Add-on?

09-10-2020 12:04:33.220 -0400 ERROR AdminManagerExternal - Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 88, in init_persistent
    hand = handler(mode, ctxInfo, data)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/aws_cloudwatch_inputs_rh.py", line 33, in __init__
    **kwargs
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/base_input_rh.py", line 46, in __init__
    self._service = LocalServiceManager(app=tac.splunk_ta_aws, session_key=self.getSessionKey()).get_local_service()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/local_manager.py", line 14, in __init__
    splunkd_host_port = self._get_entity(CONF_WEB, 'settings').get('mgmtHostPort', '127.0.0.1:8089')
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/local_manager.py", line 29, in _get_entity
    return entity.getEntity(path, name, sessionKey=self._session_key, namespace=self._app, owner=self._owner)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/entity.py", line 265, in getEntity
    serverResponse, serverContent = rest.simpleRequest(uri, getargs=kwargs, sessionKey=sessionKey, raiseAllErrors=True)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/rest/__init__.py", line 487, in simpleRequest
    import httplib2
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/httplib2/__init__.py", line 475
    print("%s:" % h, end=" ", file=self._fp)
                        ^
SyntaxError: invalid syntax

Thanks for the help
Hi Team, how do I write a regex to capture the password from each of these log lines?

Eg: [20200527-144244] login login: cf_db_password=weblogic
    [20200527-144244] login login: password=weblogic_test
    [20200527-134842] login login: cf.db.password.hms=test_weblogic

password\.?\=([^\s]+) --> Using this regex I was able to capture the first two log patterns.
password\.?\w+?\=([^\s]+) --> Using this regex I was able to capture "D: [20200527-134842] login login: cf.db.password.hms=test_weblogic"

The question is how to write one regex pattern that captures all of the password patterns in the example above.
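One candidate (a sketch, checked only by eye against the three sample lines) is to allow an optional run of word characters and dots between "password" and the equals sign; the capture-group name is a placeholder:

```spl
| rex "password[\w.]*=(?<password_value>\S+)"
```

Here [\w.]* matches the empty string in cf_db_password= and password=, and matches ".hms" in cf.db.password.hms=, so all three variants should be captured by the single pattern.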
Hi Gurus, I have a couple of questions regarding my first Splunk technical add-on development: 1. If I develop a Splunk technical add-on for Splunk 7.0, will it work on Splunk 8.0? 2. Will it support Splunk Cloud as well, or does that require a separate add-on? 3. What is the certification process for the add-on, and how long does it take? Any answers will be really helpful. Thanks and best regards, Krishna
Hi Splunk Community, I am completely new to Splunk. I have managed to deploy the Splunk Universal Forwarder on many Linux nodes and a few Windows systems. I can see /var/log/secure and /var/log/messages being indexed, along with the Windows Security event log, in the index I created. I want a dashboard that shows the following: 1. Number of hosts where the Splunk forwarder is deployed (Linux and Windows separately). 2. Successful and failed logins. 3. An alert when root logs in on Linux or Administrator logs in on Windows.
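As a starting sketch for item 1 (field availability can vary by version; the os field comes from the forwarder connection metrics the indexer records in its own _internal index):

```spl
index=_internal source=*metrics.log* group=tcpin_connections
| dedup hostname
| stats count as forwarder_count by os
```

For item 2 on Linux, failed logins typically appear in the secure log, e.g. a search over sourcetype=linux_secure for "Failed password"; the Windows equivalents are Security EventCodes 4624 (success) and 4625 (failure).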
The search below displays the first login on the system. If the user has no logon information, it should display "No Logon Found" in amber. When there is no result, it does display "No Logon Found"; I want to color that "No Logon Found" value in the single value panel.

index=XXX sourcetype="XXXX" source="*.log" user=XXXX AAA
| eval Authentication=case(XXXXXX)
| eval CredType=case(XXXX)
| eval ProductType=case(XXX)
| rename xxx As "xxxy"
| eval Time=strftime(_time,"%d %B %Y")
| stats earliest(Time) AS FirstLogin
| append [ stats count | where count=0 | eval Messge="No data found"]
| fillnull value="No Logon Found" FirstLogin
| fields FirstLogin
Hi, I would like to know more about what a zero-MB / zero-byte license is: what is included and what is not in this type of license, and what the pros and cons of using such a license in a production environment would be. In our setup we currently have a single-instance Deployment Server (Search Head), an Indexer, some UFs, and a HF. It was proposed to link only the Indexer to our License Master and have the other Splunk servers operate on a zero-MB license. Would such a licensing architecture have an adverse effect on utilizing the full potential of Splunk Enterprise features on the Deployment Server/Search Head and Heavy Forwarder? (I believe it wouldn't impact the Universal Forwarder much.) More information on the zero-MB license would be appreciated. Thanks,