All Topics

Hello Splunkers! I have event A from source A and event B from source B. I need an alert that fires when event B occurs without a corresponding event A. Is this feasible? Could you please help me or post some suggestions? Thanks in advance!
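One common approach is to search both sources together and count each type per correlation key. This is only a sketch: the index name, source values, and the shared field `transaction_id` are all assumptions, since the post doesn't say how events A and B relate, and "without event A" is taken to mean "within the same search window":

```
index=main (source="sourceA" OR source="sourceB")
| eval is_a=if(source="sourceA", 1, 0), is_b=if(source="sourceB", 1, 0)
| stats sum(is_a) as count_a, sum(is_b) as count_b by transaction_id
| where count_b > 0 AND count_a = 0
```

Scheduling this as an alert that triggers when the result count is greater than zero would then flag every B that arrived with no matching A in the window.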
Hello all! I have read all the missing-entities cases here, but mine is slightly different. I am trying to feed metrics from Unix Nmon into SAI, and those metrics are arriving in a metrics index. I configured the needed parameters:
- _meta = entity_type::nix_host in inputs.conf
- the nmon-metrics index added to the SAI indexes macro
- sourcetype = em_metrics
- host assignment (MetaData:Host) is done by Nmon's TRANSFORMS-hostfield=nmon_metrics_csv_hostoverride
Windows entities (Perfmon config for metrics) appeared in SAI, but the Unix ones did not. I am attaching a few screenshots comparing the metrics coming from Windows and Unix. My suspicion is one of the following:
- SAI expects a certain metric naming convention
- SAI expects additional mandatory dimensions (other than entity_type::nix_host)
- or something else
Any help will be appreciated.
I am using a Python script to send data to Splunk via HEC. There's no problem when sending a simple "Hello World". However, I would like to send search results (JSON format) obtained via a Python script. The results look like this:

{'_id': {'$oid': '5ec4f96e67ac75656af5ea5b'}, 'created_at': '2020-05-10T09:33:33.490855', 'appid_caller': 'fg67k78k-7f44-5c90-a1b6-42gf5jjjj00a', 'input': {'target_host': 'portal-azure.cloud.io', 'target_port': 443}, 'output': {'result': False, 'info': 'Application has failed security checks. Drill down the results [array] to find information.', 'results': [{'category': 'hosting', 'result': False, 'title': 'Insecure use of shared hosting subdomain', 'description': "The application uses shared hosting parent domain. Recommended to use (e.g.: *.abc.com, *.abc.cloud, etc).", 'cwe': 348, 'checks': []}]}}

The question is: how do I send these results (server_info) using HEC? I'm getting an error 400. I'm guessing the problem lies with the 'data' variable, which may not be defined properly. I've also tried both the services/collector and services/collector/event endpoints, but neither worked. When using the services/collector/raw endpoint, I did get a response code 200, which indicated success, but garbled data was displayed in Splunk. Below is the POST snippet used:

    splunk_headers = {'Authorization': 'Splunk f5t34545-xxxxxc-xxxx-xxxx-xxxx-xxxxxxxx'}
    data = {"sourcetype": "server", "event": server_info}
    response = requests.post('https://server03.na.abc.com:8088/services/collector/event',
                             headers=splunk_headers, data=data, verify=False)

Thank you.
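A likely cause of the 400: when a plain dict is passed to `requests.post` via `data=`, it is sent form-encoded rather than as JSON, and the HEC event endpoint expects a JSON body. A minimal sketch of the fix, assuming the URL and token from the post are placeholders (the helper names here are illustrative, not Splunk APIs):

```python
import json

def build_hec_event(server_info, sourcetype="server"):
    """Serialize a HEC event envelope; the event endpoint wants a JSON body."""
    return json.dumps({"sourcetype": sourcetype, "event": server_info})

def send_to_hec(url, token, server_info):
    # Hypothetical sender; requires the third-party `requests` package.
    # Passing the pre-serialized string via data= (or the dict via json=)
    # sends real JSON, whereas data={dict} form-encodes it and HEC rejects
    # that with a 400.
    import requests
    headers = {"Authorization": "Splunk " + token}
    return requests.post(url, headers=headers,
                         data=build_hec_event(server_info), verify=False)

# Build the body once to inspect what actually goes over the wire
payload = build_hec_event({"input": {"target_port": 443}})
```

The raw endpoint "worked" with a 200 because it accepts an arbitrary body as-is, which is why the form-encoded text showed up as garbled events.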
Greetings! Can anyone help me fix the issue below? I installed a new license while my existing trial license was still valid. After installing the new license, I got this error: "Licensing alerts notify you of excessive indexing warnings and licensing misconfigurations." Since I suspected the alert was caused by having two valid licenses, I removed the trial license. Was removing the trial license after seeing that message the right thing to do, or should I not have deleted it? Is there anything else I need to do to fix the problem? Please advise. Thank you in advance!
Can someone explain the Splunk Mothership app: its purpose, its features, and how to configure and use it?
Hi experts, I am trying to restart our Splunk server, but it won't start. Earlier I tried to start it from the UI, with no luck. I also tried rebooting via the CLI, but I don't see anything on the console. I am using Splunk 7.2 on an AWS EC2 instance (Amazon Linux 1) and have been running Splunk in that environment for about a year.

    $SPLUNK_HOME/bin/splunk -version
    Splunk 7.2.6 (build c0bf0f679ce9)

    uname -a
    Linux abcdXyz 4.14.123-86.109.amzn1.x86_64 #1 SMP Mon Jun 10 19:44:53 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

    strace /opt/splunk/bin/splunk start
    execve("/opt/splunk/bin/splunk", ["/opt/splunk/bin/splunk", "start"], [/* 50 vars */]) = -1 ENOEXEC (Exec format error)
    write(2, "strace: exec: Exec format error\n", 32strace: exec: Exec format error
    ) = 32
    exit_group(1) = ?
    +++ exited with 1 +++
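ENOEXEC from execve usually means the file being executed is neither a recognized binary format nor a script with a valid shebang, which suggests the launcher itself may be corrupted or was overwritten. As a purely illustrative check (a sketch, not a Splunk tool), the first bytes of a file distinguish an ELF binary from a shell script:

```python
def classify_executable(path):
    """Rough guess at what kind of executable a file is, based on its
    first bytes (its 'magic number')."""
    with open(path, "rb") as f:
        head = f.read(4)
    if head.startswith(b"\x7fELF"):
        return "ELF binary"
    if head.startswith(b"#!"):
        return "script with shebang"
    return "unknown (possibly corrupted or truncated)"
```

If `/opt/splunk/bin/splunk` comes back as "unknown" (the `file` command gives the same information), restoring the file from the matching installation tarball would be a reasonable next step.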
Hi, I need to pull multiple fields from a lookup CSV into the results of a proxy search.

The primary search is:

    index=PROXY domain=example.com | transaction user maxspan=1m | stats count by user

This gives me:

    user - count
    SURNAME, FIRSTNAME - X(count)

Next I have a lookup CSV containing an AD dump that I want to use to enrich the first search. Note that the Nickname field follows the same format as the user field from the proxy results.

    | fields user, Branch, Group, count | lookup AD_all_users.csv Nickname as user, Dep_Branch as Branch, Dep_Group as Group

However, when I run these searches together:

    index=PROXY domain=example.com | transaction user maxspan=1m | stats count by user | fields user, Branch, Group, count | lookup AD_all_users.csv Nickname as user, Dep_Branch as Branch, Dep_Group as Group

I get:

    User - Branch - Group - count
    SURNAME,FIRSTNAME - NULL - NULL - X(count)

Can anyone advise me on what I have wrong? PS: the lookup CSV has about 30 columns and I only need those 3.
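A likely fix, sketched with the field names from the post (assuming the CSV is registered as a lookup): the `lookup` command separates its input fields from the fields it returns with the `OUTPUT` keyword, and the `fields` clause belongs after the lookup has added the new columns — placed before it, Branch and Group don't exist yet:

```
index=PROXY domain=example.com
| transaction user maxspan=1m
| stats count by user
| lookup AD_all_users.csv Nickname as user OUTPUT Dep_Branch as Branch, Dep_Group as Group
| fields user, Branch, Group, count
```

Listing only the wanted columns after `OUTPUT` also means the other ~27 CSV columns are never pulled in.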
Greetings! How do you check that your license has been installed correctly? Thank you in advance.
TL;DR: Python scripts run via $SPLUNK_HOME/bin/splunk cmd python throw errors that aren't thrown when run with /usr/bin/python.

I've got a load of data with X/Y coordinates stored in NZTM2000 format, and I want to enrich the data by converting them to lat/lng so that I can plot them onto a map visualisation, or output them to other systems that use that format. I've created an app (name replaced with <appname> in the examples below) and dropped a Python script in there as a proof of concept. The conversion script requires a couple of additional libraries (installed using "pip install pyproj"), which I installed as the splunk user (whose home directory is also set to $SPLUNK_HOME).

The problem is that the script works fine from the command line:

    splunk@standalone-vm:~/etc/apps/<appname>/bin$ /usr/bin/python /opt/splunk/etc/apps/<appname>/bin/nztm_to_lat_lng.py
    (-40.981429307189785, 174.95992442484228)

However, if I try running it via the Splunk binary, I get an error:

    splunk@standalone-vm:~/etc/apps/<appname>/bin$ /opt/splunk/bin/splunk cmd python /opt/splunk/etc/apps/<appname>/bin/nztm_to_lat_lng.py
    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/<appname>/bin/nztm_to_lat_lng.py", line 14, in <module>
        from pyproj import Proj, transform
      File "/opt/splunk/.local/lib/python2.7/site-packages/pyproj/__init__.py", line 69, in <module>
        from pyproj._list import (  # noqa: F401
    ImportError: /opt/splunk/.local/lib/python2.7/site-packages/pyproj/_list.so: undefined symbol: PyUnicodeUCS4_FromStringAndSize

I thought it might be something to do with the 2.7 version, but both interpreters report the same:

    splunk@standalone-vm:~/etc/apps/<appname>/bin$ /usr/bin/python --version
    Python 2.7.17
    splunk@standalone-vm:~/etc/apps/<appname>/bin$ /opt/splunk/bin/splunk cmd python --version
    Python 2.7.17

Which seems to rule that out. So I tried a different route:

    splunk@standalone-vm:~/etc/apps/<appname>/bin$ /opt/splunk/bin/splunk cmd /usr/bin/python /opt/splunk/etc/apps/<appname>/bin/nztm_to_lat_lng.py
    (-40.981429307189785, 174.95992442484228)

If I get Splunk to call /usr/bin/python, everything works as expected. So this points to a problem with the version of Python shipped with Splunk, but that seems... unlikely. So, help, please?

Appendix: here's a copy of the Python file I'm trying to run (note: you'll need the pyproj library too):

    #!/usr/bin/env python
    """nztm_to_lat_lng.py: Takes an NZTM2000 X/Y coordinate and outputs a lat/lng."""

    __author__ = "Phil Tanner"
    __version__ = "0.0.1"
    __status__ = "Prototype"

    # If we're running from the Splunk command, patch our local path so we can load modules; hack!
    import os
    splunk_home = os.getenv('SPLUNK_HOME')
    if splunk_home is not None:
        import sys
        # Local app path
        sys.path.append(os.path.join(os.environ['SPLUNK_HOME'], 'etc', 'apps', '<appname>', 'bin'))
        # Splunk user path - note this is fixed to 2.7 to pull in the pip-installed pyproj files,
        # without which it fails horribly
        sys.path.append(os.path.join(os.environ['SPLUNK_HOME'], '.local', 'lib', 'python2.7', 'site-packages'))

    from pyproj import Proj, transform

    inProj = Proj("+proj=tmerc +lat_0=0 +lon_0=173 +k=0.9996 +x_0=1600000 +y_0=10000000 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs")
    outProj = Proj('epsg:4326')

    # Note: hardcoded sample, literally chosen at random!
    x1, y1 = 1764883.4618, 5461454.6101

    # Convert between NZTM & lat/lng
    x2, y2 = transform(inProj, outProj, x1, y1)
    print(x2, y2)

And for reference, here's the Linux version I'm running the server on (the standard MS Azure Splunk instance):

    splunk@standalone-vm:~/etc/apps/<appname>/bin$ uname -a
    Linux standalone-vm 4.15.0-96-generic #97-Ubuntu SMP Wed Apr 1 03:25:46 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
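The `undefined symbol: PyUnicodeUCS4_FromStringAndSize` error is characteristic of a Python 2 build mismatch rather than a version mismatch: the pyproj C extension was compiled against a "wide" (UCS4) Python build, while the other interpreter appears to be a "narrow" (UCS2) build, even though both report 2.7.17. A quick check (a sketch; `sys.maxunicode` reflects the build, and on Python 3 it is always 1114111):

```python
import sys

def unicode_build():
    """On Python 2, narrow (UCS2) builds report sys.maxunicode == 0xFFFF,
    while wide (UCS4) builds report 0x10FFFF. Python 3 is always 'wide'."""
    return "wide (UCS4)" if sys.maxunicode > 0xFFFF else "narrow (UCS2)"
```

Running this under both interpreters should reveal the difference; if it does, rebuilding pyproj from source against Splunk's bundled interpreter (rather than installing a prebuilt wheel) would be one way to get matching binaries.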
Hello everyone, I want to use append instead of a join. My search is:

    index="yut" sourcetype="test" cd IN(*) level="rrr" severity="error"
    | eval evtdate = strftime(_time,"%Y-%m-%d")
    | eval CD=cd
    | eval key=_time + url
    | table key                  <-- this part gives me 504 keys
    | append                     <-- I want to use append here
        [ search index="yut" sourcetype="test" cd IN(*) severity="error"
        | eval CD=cd
        | eval evtdatein = strftime(_time,"%Y-%m-%d")
        | sort by _time desc
        | dedup url
        | eval newkey=_time + url
        | table newkey ]         <-- this part gives me 6 newkeys
    | eventstats values(newkey) as UniqueKeys
    | search key IN UniqueKeys

I want to combine all 504 + 6 = 510 keys and then remove the duplicate keys from the 504.
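One way to combine both result sets and drop duplicates, sketched under the assumption that the goal is a single deduplicated key column: rename `newkey` to `key` inside the subsearch so the appended rows land in the same column, then `dedup` the union:

```
index="yut" sourcetype="test" cd IN(*) level="rrr" severity="error"
| eval key=_time + url
| table key
| append
    [ search index="yut" sourcetype="test" cd IN(*) severity="error"
    | sort - _time
    | dedup url
    | eval key=_time + url
    | table key ]
| dedup key
```

With both sides sharing the field name, the final `dedup key` collapses the 510 rows down to the distinct keys, with no need for eventstats or a second search pass.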
In our ES search head cluster, members randomly get this error:

    Search Head Clustering Service Not Ready
    Please wait, the status of your search head cluster is not ready
    Service ready flag: false
    Rolling restart in progress: false

It then remediates itself and comes back, and the problem happens again and again.
For McAfee integration I need to install DB Connect. Is a heavy forwarder required, or how else can we send the logs to Splunk?
Has anyone seen this error before?

    <?xml version="1.0" encoding="UTF-8"?>
    <response>
      <messages>
        <msg type="ERROR">Error while deploying apps to first member, aborting apps deployment to all members: Error while updating app=test1 on target=https://xx.xx.xx.xx:8089: Non-200/201 status_code=400; {"messages":[{"type":"ERROR","text":"Argument \"deploy\" is not supported by this handler."}]}</msg>
      </messages>
    </response>

I am running this from the deployer; the target address is the current cluster captain. The deployer is Splunk v7.3.5 talking to v7.2.7 search heads.
I want to display the text of a table column on one line; on hover, it should show the whole description. Example table:

    Id | Error | Count
    1  | "Sorry, I didn't get that. You can go back to these teams for more inquiries: I can also check for the status of a job you are monitoring (i.e. Status of the job HIST-01). Aside from this I am also able to do the following: - Search from the Document site (i.e. Document Analytics) - Find people from the People Page - Give the latest weather update of a particular city (i.e. Weather in Manila) | 290
    2  | Glance task is failed. | 42
    3  | Error found in processing the mail attachment. | 29
    4  | Task is failed to fetch the start time | 28
    5  | Failed | 26

It should look like this instead:

    Id | Error | Count
    1  | "Sorry, I didn't get that. You can go back to these | 290
    2  | Glance task is failed. | 42
    3  | Error found in processing the mail attachment. | 29
    4  | Task is failed to fetch the start time | 28
    5  | Failed | 26

When I hover over the truncated text "Sorry, I didn't get that. You can go back to these", it should show the whole text from the first table. Is it possible to implement something like this? Please help. Thanks in advance.
Hi, I created an action rule that auto-closes an episode in Episode Review once it hasn't received any events for 1 hour. It is working fine, but I can't figure out why it's adding multiple comments. I need advice on how to fix this. Thanks in advance!
I am adjusting the SQL statement for a data input. There is a guideline to use rising input mode:

    SELECT * FROM your_table WHERE dtime > ? ORDER BY dtime ASC

How can I use the checkpoint value (?) as an upper limit as well? Something like:

    ... WHERE dtime > ? AND dtime < ? + INTERVAL '1 day'
I have this event:

    Message="Internal event: Function ldap_search entered. SID: S-1-5-18 Source IP: 127.0.0.1:25855 Operation identifier: 680571 Data1: Data2: 2796807187 Data3: Data4:"

How can I keep just the leading part, Message="Internal event: Function ldap_search entered.", using rex to define it as a field, or using the eval command?
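A sketch of both approaches, assuming the interesting part always ends at the first period (the field name `msg_short` is illustrative):

```
... | rex field=Message "^(?<msg_short>[^.]+\.)"

... | eval msg_short=mvindex(split(Message, ". "), 0) . "."
```

The rex version captures everything up to and including the first period into `msg_short`; the eval version splits on ". " and keeps the first segment, re-appending the period.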
We use Splunk Enterprise version 7.2.3 and extract the timestamp from a field. Times close to the present are recognized, but historical times are not. How can I solve this? Help!
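One common cause (an assumption here, since the post doesn't show the data): Splunk rejects event timestamps that fall outside configurable limits at index time, and the default window for past dates can be too narrow for old data. Those limits live in props.conf per sourcetype. A sketch, with an illustrative sourcetype name and values:

```
# props.conf (sourcetype name, prefix, and format are illustrative)
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
# Accept timestamps up to ~10 years in the past; events older than the
# limit fall back to another time source instead of the extracted value.
MAX_DAYS_AGO = 3650
```

If the historical events are being indexed with the wrong time rather than dropped, comparing `_time` against the raw text of an affected event would confirm whether the extracted timestamp is being discarded.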
Hi Splunk team, I am trying to run the search below. I need my end output to contain both dc(totalCustomers) and dc(Customers_520Error).

Query:

    index=*** event=test
    | stats dc(customerId) as totalCustomers
    | eval errortype=case(errorCode="520","error_520")
    | chart dc(customerId) over errorType
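A sketch of one way to get both distinct counts in a single pass. Note that after the first `stats` in the original search, per-event fields such as errorCode and customerId no longer exist, so the chained `eval` and `chart` have nothing to work with; computing both counts in one `stats` avoids that. `dc(eval(...))` counts only the customers matching the condition:

```
index=*** event=test
| stats dc(customerId) as totalCustomers,
        dc(eval(if(errorCode="520", customerId, null()))) as Customers_520Error
```

This yields one row with both fields, which can then be tabled or charted as needed.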
I am having issues with my deployment-app config files. Whenever I edit the config file for one of my applications in deployment-apps and save it, the change doesn't replicate to the deployment clients. My understanding is that the clients check the MD5 hash of the apps in their folder, and the server compares those hashes and then pushes the new config files. I am not seeing any of my changes propagate: I edited the config file to add event ID 4776 to the blacklist, yet I can still see those events in my search results. How do I fix this?