All Topics


The column below has two values after the eventstats command. I want to ignore the second value, "Passed", in the "Value" column. I tried mvexpand to split it, but I really don't want that approach, since I can't then use dedup to remove the duplicates.
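A hedged sketch of one way to do this, assuming the multivalue field is literally named Value and "Passed" is the exact value to drop: filter the multivalue field in place instead of expanding it.

| eval Value=mvfilter(Value!="Passed")

mvfilter keeps only the values for which the predicate is true, so no mvexpand (and therefore no dedup) is needed afterwards.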
We are currently using the "Microsoft Office 365 Reporting Add-on for Splunk" (https://splunkbase.splunk.com/app/3720/) in Splunk Cloud. According to the Upgrade Readiness App, this add-on does not support jQuery 3.5 yet. Does anyone know if there is a plan to support it? Or are there any other add-ons that can retrieve O365 email logs? Thank you.
I have an SHC and I am using an SHC deployer to deploy apps to it. Those apps include Splunk ES, which is very large. The latest 7.0.0 update for ES may be so large that it causes errors when I apply the cluster bundle. I have tried increasing max_content_length in [httpServer] in server.conf, but this did not work. What configs can I change to increase this limit so I can use my SHC deployer? It seems odd that I can't deploy an unmodified ES installation, per the documentation, because it's beyond Splunk's limits.

splunk apply shcluster-bundle -target xxx -auth xxx
Error while deploying apps to target=xxx with members=x: Error while updating app=Splunk_SA_Scientific_Python_Linux_x86_64 on target=xxx: Non-200/201 status_code=413; {"messages":[{"type":"ERROR","text":"Content-Length of 2147484959 too large (maximum is 2147483648)"}]}
...repeated for each member
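For what it's worth, the 413 is returned by the receiving instance, so raising the limit on the deployer alone may not be enough. A hedged sketch of the change, assuming it has to be applied (with a restart) on every SHC member as well as the deployer; the 5 GB value is an arbitrary example:

# server.conf on the deployer and on each SHC member
[httpServer]
# default is 2147483648 (2 GB); raise it above the bundle size
max_content_length = 5368709120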
Hi, I'm new to Splunk and I would like to get the top errors into a table, but the external API returns a stack trace, making it difficult to work with. I'm trying to write a regex that groups those errors by error code, matching "*Return code: [<code>]*" (while still handling other errors), and then counts them. Could you help me, please? 🥺
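A hedged sketch, assuming the raw events contain the literal text "Return code: [<code>]" with the code inside square brackets (the field name return_code is illustrative):

| rex field=_raw "Return code: \[(?<return_code>[^\]]+)\]"
| eval return_code=coalesce(return_code, "other")
| stats count by return_code
| sort - count

The coalesce keeps events without a recognizable return code in the table under an "other" bucket.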
Hello Splunkers - I am trying to filter out any value that is wrapped in $, such as $host$ or $value$. I thought the below would work, but it does not. Can someone point out what I am doing wrong? Thanks!

| eval dollar_sign=if(host_value=="$host$" OR host_value=="$value$", "yes", "no")
| search NOT dollar_sign=yes
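One possible explanation: the literal comparisons only catch the exact strings $host$ and $value$, not arbitrary $...$-wrapped values (and inside a dashboard, $host$ would be substituted as a token before the search even runs). A hedged alternative using a regex match, assuming the field is host_value:

| eval dollar_sign=if(match(host_value, "^\$[^$]+\$$"), "yes", "no")
| search NOT dollar_sign=yes

match() flags any value that starts and ends with a $, whatever sits in between.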
In the following log entry as "_raw": "OPTIONS /nnrf-nfm/v1 HTTP/2.0" 405 173 "-" "gmlc-http-client/2.0" "-" I have a working rex for the "405" field location and the "173" field location. I would like to build a rex to identify the "gmlc-http-client" section of that log entry. (That field can show several different client types between those quotes.) My rex is as follows:

rex field=_raw "HTTP\/2\.0\"\s\d{3}\s\d{3}\s\"\-\"\s\"(?<Error3>)\"\s\"\-\""

This rex does not error, but the result comes back as null/blank.
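The likely culprit is the empty capture group: (?<Error3>) matches zero characters, so the rex succeeds but captures nothing. A hedged fix that keeps the rest of the pattern as posted:

| rex field=_raw "HTTP\/2\.0\"\s\d{3}\s\d{3}\s\"\-\"\s\"(?<Error3>[^\"]+)\"\s\"\-\""

[^\"]+ captures everything up to the closing quote, so it should pick up whichever client string appears there.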
Hi all, I installed the Splunk CIM on my Splunk instance and I have a question about tag whitelisting. The docs (https://docs.splunk.com/Documentation/Splunk/8.2.4/Knowledge/Designdatamodelobjects) suggest that the tags whitelist configuration in the Splunk CIM settings must contain at least the tags used within the constraints of the specific datamodel. Let's take the Authentication datamodel as an example. Comparing the default tags whitelist configuration after installing the app against the root dataset constraint, the tag authentication used in the root constraint is not, by default, one of the whitelisted tags for the Authentication datamodel. Should I add the tags used inside the constraints on my own? Or is there something I'm missing? Thanks a lot.
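If you do add them by hand, the whitelist lands in datamodels.conf. A hedged sketch, assuming the setting belongs in Splunk_SA_CIM/local/datamodels.conf and that the tag list below is only illustrative:

# Splunk_SA_CIM/local/datamodels.conf (sketch; tag list is illustrative)
[Authentication]
tags_whitelist = authentication, default, cleartext, insecure, privileged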
The universal forwarder setup wizard ended prematurely on 8.2.4.
I'm having an issue on my SHC. When I run a simple stats count by _time for any particular index, _time comes through in the preferred time zone set in my preferences, but when I export the results to a CSV file it reverts to the local time zone. Does anyone know why this would be happening?
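A hedged workaround, assuming a stable timestamp column in the CSV is the actual goal: render the time as a string at search time, since the export path writes _time in its own default rendering rather than honoring user preferences (the field name time_display is illustrative):

| stats count by _time
| eval time_display=strftime(_time, "%Y-%m-%d %H:%M:%S %z")
| fields - _time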
Hi, I'm planning on using LDAP user authentication for a mid-size Splunk Enterprise environment. Reading through the Splunk documentation, I'm getting confused on what the minimum required information is to set it up.

Config files vs Splunk Web
Is the bindDN and bindDN password required, or just optional? Is the setup through Splunk Web different from setup via config files? Because I do see differences in the optional fields.

LDAPS
If I set up LDAPS, meaning using SSL, what are the steps that need to be done? Do I need to use certificates, and where do these need to be placed?

Thank you, Oj.
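For reference, a minimal authentication.conf sketch; every value below is a placeholder, and whether bindDN is required depends on whether your directory allows anonymous binds:

# authentication.conf (minimal sketch, all values are placeholders)
[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ldap.example.com
# 636 with SSLEnabled = 1 for LDAPS, 389 for plain LDAP
port = 636
SSLEnabled = 1
bindDN = cn=splunk-bind,ou=service,dc=example,dc=com
bindDNpassword = changeme
userBaseDN = ou=people,dc=example,dc=com
userNameAttribute = sAMAccountName
realNameAttribute = cn
groupBaseDN = ou=groups,dc=example,dc=com
groupNameAttribute = cn
groupMemberAttribute = member

Splunk Web writes the same stanza underneath (and encrypts bindDNpassword), so the two setup paths should end up equivalent.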
message from "/opt/splunk/etc/apps/Splunk_TA_nix/bin/vmstat.sh" dmesg: read kernel buffer failed: Operation not permitted
Ideally, a JOB should start with status either RUNNING or STARTJOB, or maybe both, and it can end with status TERMINATED, FAILURE, INACTIVE, or SUCCESS. Examples:

[11/27/2021 08:00:00] TEST EVENT: STARTJOB JOB: A
[11/27/2021 08:00:05] TEST EVENT: CHANGE_STATUS STATUS: RUNNING JOB: A
[11/27/2021 08:06:23] TEST EVENT: CHANGE_STATUS STATUS: SUCCESS JOB: A
[11/28/2021 08:00:00] TEST EVENT: STARTJOB JOB: A
[11/28/2021 08:00:05] TEST EVENT: CHANGE_STATUS STATUS: RUNNING JOB: A
[11/28/2021 08:00:05] TEST EVENT: CHANGE_STATUS STATUS: FAILURE JOB: A
[11/28/2021 09:06:23] TEST EVENT: CHANGE_STATUS STATUS: SUCCESS JOB: A
[11/26/2021 08:00:05] TEST EVENT: CHANGE_STATUS STATUS: RUNNING JOB: B
[11/26/2021 20:06:23] TEST EVENT: CHANGE_STATUS STATUS: FAILURE JOB: B
[11/25/2021 08:00:00] TEST EVENT: STARTJOB JOB: C
[11/25/2021 20:06:23] TEST EVENT: CHANGE_STATUS STATUS: INACTIVE JOB: C

I have more than 1400 jobs; some run daily, some monthly, and some quarterly. Ultimately, I am looking to calculate the last 90 days' average duration (job end time - job start time), but somehow events are not getting properly grouped. Below are the queries I am currently using:

Example 1
| eval end=case(_raw LIKE "%INACTIVE%","FAIL", _raw LIKE "%TERMINAT%","FAIL", _raw LIKE "%FAILURE%","FAIL", _raw LIKE "%SUCCESS%","FAIL", true(),"NA")
| reverse
| sort - _time limit=0
| transaction JOB startswith="*STARTJOB* AND *RUNNING*" endswith="end="FAIL"" keeporphans=true maxevents=9999999 keepevicted=true

Example 2
| transaction JOB startswith=(STATUS="RUNNING" OR STATUS="STARTJOB") endswith=(*TERMINATED*) OR (*FAILURE*) OR (*INACTIVE*) OR (*SUCCESS*) keeporphans=true maxevents=9999999 keepevicted=true

Any help would be highly appreciated. Thanks in advance.
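A hedged stats-based alternative that avoids transaction entirely, assuming JOB and STATUS are already extracted as fields and that every run begins with a STARTJOB event:

| sort 0 JOB _time
| eval phase=case(searchmatch("STARTJOB"), "start", STATUS=="SUCCESS" OR STATUS=="FAILURE" OR STATUS=="TERMINATED" OR STATUS=="INACTIVE", "end")
| streamstats count(eval(phase=="start")) as run_id by JOB
| stats min(_time) as start_time, max(eval(if(phase=="end", _time, null()))) as end_time by JOB, run_id
| eval duration=end_time-start_time
| stats avg(duration) as avg_duration_sec by JOB

streamstats increments a run number per job at each STARTJOB, so consecutive runs of the same job stay separate instead of being merged the way transaction can merge them.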
Hi, I have the Splunk Security Essentials app v3.3.2 installed on my Splunk Enterprise instance, but I could not find the post-filter macros related to it, such as `detect_mimikatz_using_loaded_images_filter`. Kindly help.
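One thing worth checking: as far as I know, the *_filter macros ship with ES Content Update (ESCU) rather than with Security Essentials itself, and they default to a pass-through. If one is genuinely missing, a sketch of the conventional definition you could add yourself:

# macros.conf (sketch; stanza name matches the missing macro)
[detect_mimikatz_using_loaded_images_filter]
definition = search *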
Using the app user interface introduced in 5.1.0.70187: when running the app from this interface, it fails to save state. This is a new feature in 5.1 which does not exist in 5.0. When running the app from an event, there is no issue. Looking at the state directory, there are temporary files created in the directory for each failed execution. Note: there is a permission error, chmod_func(dst, stat.S_IMODE(st.st_mode)) raising PermissionError: [Errno 1], which is likely the root cause of the issue.

{"identifier": "retrieve flows", "result_data": [{"data": [], "extra_data": [], "summary": {}, "status": "failed", "message": "Could not retrieve Tenants(Domains)", "parameter": {"timespan": 60, "start_time": "2022-01-24T15:30:00Z", "record_limit": 2000, "malicious_ip": "192.168.200.50"}, "context": {}}], "result_summary": {"total_objects": 1, "total_objects_successful": 0}, "status": "failed", "message": "Exception Occurred. [Errno 1] Operation not permitted: '/opt/phantom/local_data/app_states/eac976c5-c8d7-4b77-9fdd-52bab068679c/9_state.json'.\r\nTraceback (most recent call last):\r\n File \"../pylib/phantom/base_connector.py\", line 3252, in _handle_action\r\n File \"/opt/phantom/apps/ciscosecurenetworkanalytics_eac976c5-c8d7-4b77-9fdd-52bab068679c/ciscosecurenetworkanalytics_connector.py\", line 458, in finalize\r\n self.save_state(self._state)\r\n File \"../pylib/phantom/base_connector.py\", line 2978, in save_state\r\n File \"/opt/phantom/usr/python36/lib/python3.6/shutil.py\", line 246, in copy\r\n copymode(src, dst, follow_symlinks=follow_symlinks)\r\n File \"/opt/phantom/usr/python36/lib/python3.6/shutil.py\", line 144, in copymode\r\n chmod_func(dst, stat.S_IMODE(st.st_mode))\r\nPermissionError: [Errno 1] Operation not permitted: '/opt/phantom/local_data/app_states/eac976c5-c8d7-4b77-9fdd-52bab068679c/9_state.json'", "exception_occured": true, "action_cancelled": false}

Traceback (most recent call last):
  File "ciscosecurenetworkanalytics_eac976c5-c8d7-4b77-9fdd-52bab068679c/eac976c5-c8d7-4b77-9fdd-52bab068679c_2022_01_25_02_38_01.py", line 32, in <module>
    raise Exception("Action Failed")
Exception: Action Failed

There is no issue with the code in 5.0.1.66250, as this user interface does not exist there. The code functions fine from the shell/CLI or from the events screen.
I have installed the TA for Digital Guardian. How do I configure the input? Does the data export need to be configured on the Digital Guardian side? If yes, how?
Hello All, I have a simple search for the alert:

index="vpn" message="recently failed"
| table _time, host, message

The alert triggers when the number of results is >2. I need to put all events' field values into the ServiceNow ticket description. Unfortunately, $result.fieldname$ takes the value from the first event only, but this alert requires >2 events. Are there any options to manage this with multiple events? Thank you in advance!
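A hedged workaround: collapse all matching events into one field inside the alert search, then reference that single field in the ticket description (the field names row, event_count, and description are illustrative):

index="vpn" message="recently failed"
| eval row=strftime(_time, "%F %T") . " " . host . " " . message
| stats count as event_count, list(row) as description
| eval description=mvjoin(description, "; ")

$result.description$ then carries one entry per event, and the trigger condition can test event_count > 2 since the search now returns a single row.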
Hello, I have added a multiselect field to my dashboard and I am using a search to fill it with values. One of these values is "Not tested", and if I select this value in the multiselect field I do not get search results. All other values like "A", "A+", "***" work, but no values with whitespace do. Code for the use of the multiselect field in a search to filter a table:

| search grade IN ($ms_X9Rhybia$)

I think the reason is the handling of multiselect values with whitespace in $ms_X9Rhybia$ (wrong detection of whitespace as a delimiter, for example).
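A hedged fix via the multiselect's token formatting options, wrapping each selected value in quotes so "Not tested" survives as a single value; the token name matches the post, the rest is illustrative:

<input type="multiselect" token="ms_X9Rhybia">
  <label>Grade</label>
  <!-- populate choices from the existing search -->
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>,</delimiter>
</input>

With that in place, grade IN ($ms_X9Rhybia$) expands to grade IN ("A","Not tested") instead of splitting on the space.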
Hi, there are some hosts reporting to Splunk with different sourcetypes. We want to filter out every host that is reporting only the XYZ sourcetype; a host should be shown if it reports the XYZ sourcetype along with any other sourcetype. Could you please help us with this query?
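A hedged sketch with tstats, treating XYZ as a placeholder sourcetype name and index=* as the search scope:

| tstats values(sourcetype) as sourcetypes where index=* by host
| search sourcetypes="XYZ"
| where mvcount(sourcetypes) > 1

values() collects each host's distinct sourcetypes, so mvcount > 1 keeps only hosts that report XYZ alongside at least one other sourcetype.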
Hello guys, I am fairly new to Splunk, and I wish to create a system where I can extract unique client IPs from our org's logs in Splunk, send them to a third party's REST API for reputation checks, and raise alerts in Splunk based on the response. What would be the best way to do this? I tried experimenting with the Add-on Builder and app creation but couldn't get any further. Is there a way I can create a custom app to access the logs, send them out, and raise alerts based on the response? Thanks in advance.
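One common pattern is an external lookup: a script receives each IP, calls the reputation API, and hands a score back to the search, which an alert can then threshold. A hedged sketch; the lookup name, script name, and field names are all hypothetical:

# transforms.conf, wiring a script up as an external lookup
[ip_reputation]
external_cmd = ip_reputation.py clientip score
fields_list = clientip, score

Example alert search using it:

index=web
| stats count by clientip
| lookup ip_reputation clientip OUTPUT score
| where score > 80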
This gives me the data as integer counts; I want to calculate percentages. How can we do that?

| savedsearch cbp_inc_base
| eval _time=strftime(opened_time, "%Y/%m/%d")
| bin _time span=1d
| chart count(incident_number) as IncidentCount over _time by hasAppBlueprints
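A hedged sketch that turns each column into a share of the day's total; it assumes per-day percentages across hasAppBlueprints values are what's wanted (note that bin expects a numeric _time, so this sets _time from opened_time directly rather than via strftime):

| savedsearch cbp_inc_base
| eval _time=opened_time
| bin _time span=1d
| chart count(incident_number) over _time by hasAppBlueprints
| addtotals fieldname=Total
| foreach * [ eval "<<FIELD>>_pct"=round('<<FIELD>>' / Total * 100, 1) ]
| fields - Total, Total_pct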