All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I need some help understanding this problem: sometimes, in the alert_data_results logs, I get two alerts in a single log event. Alert Manager ver 3.0.5. Any ideas where this problem originates? Thanks!
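If the two alerts are being merged into one event at index time, tightening event breaking for that source is one possible fix; a minimal props.conf sketch, assuming the sourcetype name and that each alert record begins with an ISO-style timestamp (both assumptions):

  [alert_manager_results]
  # Break events on newlines that are immediately followed by a timestamp
  SHOULD_LINEMERGE = false
  LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2})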
Hi all, I am trying to create a button in a dashboard. Pressing the button should remove the selected lookup file. I have found a command line (see below) that does this. How can I invoke this from JavaScript or from the dashboard XML file?
command line: curl -k -u admin:pass --request DELETE https://localhost:8089/servicesNS/admin/search/data/lookup-table-files/remove_test.csv
Thank you!
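One possible route from a Simple XML JavaScript extension is to call the same REST endpoint through the Splunk JS SDK service object; a minimal sketch, assuming the file is loaded from the app's appserver/static directory and that the button has id delete-lookup (both assumptions):

  require(['jquery', 'splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function ($, mvc) {
      // Service handle that reuses the logged-in user's session
      var service = mvc.createService();
      $('#delete-lookup').on('click', function () {
          // Same DELETE call as the curl command, via the REST API
          service.del('/servicesNS/admin/search/data/lookup-table-files/remove_test.csv', {}, function (err) {
              if (err) {
                  console.error('Delete failed', err);
              } else {
                  console.log('Lookup file removed');
              }
          });
      });
  });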
I have a system X that sends syslog to a Splunk HF, which then forwards to Splunk Cloud. The syslog contains the same data in the fields msg and desc, so I'd like to remove the field desc on the Splunk HF before the events are sent on. How can I do that? I thought about using transforms.conf and props.conf (https://docs.splunk.com/Documentation/Splunk/7.0.3/Forwarding/Routeandfilterdatad#Discard_specific_events_and_keep_the_rest), but that approach discards entire events rather than a single field.
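One commonly used option is SEDCMD in props.conf on the HF, which rewrites the raw event at parse time; a minimal sketch, assuming the sourcetype name and that desc appears as a whitespace-delimited key=value pair (both assumptions; quoted or multi-word values would need a different regex):

  [system_x_syslog]
  # Strip the redundant desc=<value> pair from the raw event
  SEDCMD-strip_desc = s/\s?desc=\S+//g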
I am using the Microsoft SQL Server app from SOAR to connect to a Microsoft SQL Server, but I get the error below when testing the connection.
App 'Microsoft SQL Server' started successfully (id: 1660812594017) on asset: 'efi_test'(id: 104)
Loaded action execution configuration
Error authenticating with database (20002, b'DB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (myserver.com)\n')
1 action failed
Unable to connect to host: myserver.com
I am able to connect to the server via SSMS using the same credentials.
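DB-Lib error 20002 comes from the FreeTDS layer underneath the app and usually indicates the TCP connection itself failed rather than a credential problem; a quick reachability sketch to run from the SOAR host, assuming the default SQL Server port 1433 (an assumption, since named instances often listen elsewhere):

  import socket

  # Confirm the SOAR host can open a TCP connection to the SQL Server port
  with socket.create_connection(("myserver.com", 1433), timeout=5):
      print("TCP connection to myserver.com:1433 OK")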
I really like this Data Manager app. Does anyone know when an AWS VPC Flow Logs input will be included in the Data Manager app of Splunk Cloud?
Please help answer this question, thank you: all the host values of the data I ingest are currently host names, and I want to change them all to IP addresses. How can I do this? For example, host=Splunk should become host=192.168.3.1. There are many hosts to modify, so is there a configuration that can be pushed uniformly from the deployment server?
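One possible index-time approach is a host override in props.conf and transforms.conf, packaged in an app and pushed from the deployment server like any other configuration; a minimal sketch for a single name-to-IP mapping (the sourcetype name is an assumption, and scaling to many hosts means one transform per mapping or an ingest-time lookup, which is worth verifying for your version):

  # props.conf
  [my_sourcetype]
  TRANSFORMS-rewrite_host = set_host_splunk

  # transforms.conf
  [set_host_splunk]
  SOURCE_KEY = MetaData:Host
  REGEX = Splunk
  DEST_KEY = MetaData:Host
  FORMAT = host::192.168.3.1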
Hi All, We use the latest Splunk App for Jenkins and the latest Splunk plugin.
Splunk App for Jenkins: version 2.0.4 (https://splunkbase.splunk.com/app/3332/#/overview)
Splunk plugin for Jenkins: version 1.10.0 (https://plugins.jenkins.io/splunk-devops/)
Splunk Enterprise version: 9.0.0
Jenkins version: 2.346.2
When I go to Build Analysis > Job Stage Pipeline I get "No pipeline data." Can you please let me know what is required to get the Job Stage Pipeline? I am able to see other items, for example Logs and Artifacts. There are more details about my settings in the Splunk and Jenkins plugin screenshots. (Should I change the index of my HEC? I only use an IP as my host and hostname.) Thanks, AsherRTK
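One way to check whether pipeline stage events are arriving at all, and into which index, is a quick metadata search; the app's dashboards query specific indexes, so a mismatch between the HEC token's index and what the app expects would leave panels empty (which index names the app's macros expect is an assumption to verify in the app's configuration):

  index=* source="*jenkins*" | stats count by index, sourcetype, source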
I have a table with the fields Source IP and Destination IP, and I want to access them in a drilldown. I tried $row.Source IP$ and $row.Destination IP$.
Note: I don't want to change the table labels to Source_IP.
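Row tokens with spaces in the field name tend not to resolve; one possible workaround that keeps the display labels intact is the predefined click tokens, which reference the clicked cell rather than a named field; a minimal Simple XML sketch (the target dashboard path ip_details is an assumption):

  <drilldown>
    <!-- $click.value2$ is the value of the clicked cell, URL-encoded with |u -->
    <link target="_blank">/app/search/ip_details?form.ip=$click.value2|u$</link>
  </drilldown>

Note this passes whichever cell the user clicked; if you always need a specific column regardless of where the click lands, carrying a duplicate underscore-named column in the results is a common alternative.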
Hi there, We set up SAML with ADFS for one of our clients three years ago. In the client's ADFS setup, I found that the Splunk certificate has expired (from the SAML Splunk metadata). I gave them the new certificate from the latest SAML metadata, but it didn't let users log in. I am confused: how are logins still happening for users if Splunk's certificate has expired in ADFS? Also, what can be done so that the Splunk certificate in ADFS is renewed? And which certificate is used for the handshake in SAML with ADFS?
Regards, Shikha
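For what it's worth, you can confirm which certificate Splunk is currently presenting, and when it expires, directly from the certificate file referenced in authentication.conf; a sketch, with the default server certificate path used purely as a placeholder (an assumption; your SAML signing certificate may live elsewhere):

  openssl x509 -in /opt/splunk/etc/auth/server.pem -noout -subject -enddate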
Hi Team, I used the Linux monitoring extension to monitor an NFS mount point. There are multiple metrics available in the extension. According to the metrics YAML file, I should get the used %, available %, and availability metrics, but I am only getting the availability metric under the mountednfs metrics. Has anyone here used this extension?
Hello, I have a search query that ends with | outputlookup report.csv, and I saved it as an alert that runs daily. But when I try to check it using | inputlookup report.csv, it finds no results. I checked the job inspector for the alert and found the CSV was output to splunk/var/run/splunk/csv. Can't I read a file in this directory, or should I use an alert action to output the CSV?
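The var/run/splunk/csv path is where per-job results files land, not the lookups directory that inputlookup reads from; one thing worth trying is forcing the lookup to be created in the app you search from, so permissions and app context line up; a minimal sketch (that app context, rather than the command itself, is the problem here is an assumption):

  ... | outputlookup createinapp=true report.csv

Then run | inputlookup report.csv from that same app.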
I wrote a screen-scraping script on a server running Splunk Forwarder version 8.2.3. The script is in a file and runs fine from the Linux command line. It was developed with Python 3.6. I added the script to our Universal Forwarder's local inputs.conf and I can see the script is scheduled successfully. However, it consistently fails with a message in splunkd.log. I have no idea why the message references python3.7; it is not installed anywhere on the system, and 3.6 is installed at /usr/bin/python3.6. I tried changing the server.conf properties in local by adding "python.version = python3" in the [general] section and restarting, but to no avail. Please advise what else I might try. Thanks in advance.
08-18-2022 00:19:45.525 +0000 ERROR ExecProcessor [3423479 ExecProcessor] - message from "python3.7 /opt/splunk/sjcinf8469vmw15/splunkforwarder-8.2.3/splunkforwarder/bin/scripts/scrapeGmrPage.py" /bin/sh: python3.7: command not found
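Worth noting that the Universal Forwarder ships without its own Python runtime, so whatever interpreter the input resolves to must exist on the OS; one possible workaround is a small shell wrapper that pins the interpreter explicitly and is what inputs.conf invokes; a sketch (the wrapper filename and interval are assumptions):

  #!/bin/sh
  # scrapeGmrPage.sh - run the scraper with the system's Python 3.6 explicitly
  exec /usr/bin/python3.6 /opt/splunk/sjcinf8469vmw15/splunkforwarder-8.2.3/splunkforwarder/bin/scripts/scrapeGmrPage.py "$@"

  # inputs.conf
  [script://$SPLUNK_HOME/bin/scripts/scrapeGmrPage.sh]
  interval = 300
  disabled = 0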
Hello, I have created a Splunk AWS Linux instance and installed Splunk Enterprise on it. The install was successful and I am able to reach the login page for Splunk Web. When logging in, I get an error at the bottom of the page that says "Server Error". All of the necessary ports are open on the server. I have attached a photo of the error and the WARN I received in the splunkd logs after each failed login attempt. Any suggestions on a fix would be highly appreciated.
I have a modular input that writes to Splunk using:
event = Event()
event.data = json.dumps(data)
ew.write_event(event)
This all works fine, except that some event.data records contain a detail field whose value is itself JSON, and it is written to Splunk as a string. How do I perform field extraction and index the data contained in that one detail field, which is JSON within JSON?
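One possible search-time answer is spath with its input option, which parses a JSON string held inside a single field; a minimal sketch (index and sourcetype names are assumptions):

  index=my_index sourcetype=my_modular_input
  | spath input=detail

The fields nested inside detail then become ordinary extracted fields for the rest of the search.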
So it says: Could not load lookup=LOOKUP-itsi_kpi_attributes. Looking around, I think I found some pointers, but if I click on Lookup table files and filter for itsi_kpi_attributes, I get nothing. What's worse, there is no CSV either, even though I see a pointer to a file that doesn't exist. If I am right, where is this file? If not: #1, does what I am asking make sense? #2, is there something else you'd like to see? If so, tell me where it is. Thanks.
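One way to see where the lookup definition actually points, and what backing file it names, is a REST search from the search bar; a sketch (that the definition is visible to your role is an assumption):

  | rest /servicesNS/-/-/data/transforms/lookups splunk_server=local
  | search title="itsi_kpi_attributes"
  | table title eai:acl.app filename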
Hi everyone, is it possible to modify a portion of a CSV file used with inputlookup? Cheers.
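A commonly used pattern is to read the file with inputlookup, change only the rows or columns you care about, and write it back with outputlookup; a minimal sketch, assuming a lookup named mylookup.csv with host and status columns (all assumptions):

  | inputlookup mylookup.csv
  | eval status=if(host="web01", "retired", status)
  | outputlookup mylookup.csv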
Hello! I am making a search that covers Sunday through Tuesday one week and Sunday through Wednesday the next week. I have a field called "date_wday" which contains values representing the day of the week (sunday, monday, tuesday, ..., saturday).
My search: index=blah (date_wday=monday OR date_wday=tuesday OR date_wday=sunday)
My search successfully filters the data to Sunday through Tuesday, but how can I also have it add data from every other Wednesday?
ex. Week 1: Sunday-Tuesday
Week 2: Sunday-Wednesday
Week 3: Sunday-Tuesday
......
Thank you
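One possible way to express the every-other-Wednesday condition is to derive the ISO week number and keep Wednesdays only when that number has the right parity; a sketch (whether even or odd weeks are your "Wednesday weeks" is an assumption, so flip the comparison as needed):

  index=blah (date_wday=sunday OR date_wday=monday OR date_wday=tuesday OR date_wday=wednesday)
  | eval week=tonumber(strftime(_time, "%V"))
  | where date_wday!="wednesday" OR (week % 2 == 0)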
Hey everyone, I recently installed the BMC Remedy Add-on for Splunk and followed the directions to get set up. I successfully connected to BMC via REST credentials, set up the remedy_fields.conf file, and successfully created a ticket via search and the remedyincidentcreatestreamrest command. My problem is automating this experience. I created an alert based on a search (per the docs) and specified the "Remedy Incident Integration using REST API" trigger. Looking at the splunk_ta_remedy_rest_alert.log file I see the following authentication error:
2022-08-17 15:07:35,356 ERROR pid=11181 tid=MainThread file=remedy_helper.py:create_incident:287 | Authentication failed, status_code=401, url='https://url-restapi.onbmc.com:443/api/arsys/v1.0/entry/HPD:ServiceInterface', params={'fields': 'values(Incident Number, Incident_Status)'}, response=[{"messageType":"ERROR","messageText":"Authentication failed","messageNumber":623,"messageAppendedText":"remedy_user"}]
2022-08-17 15:07:35,657 INFO pid=11181 tid=MainThread file=remedy_helper.py:create_jwt_token:162 | Successfully generated a new jwt token
2022-08-17 15:07:36,030 ERROR pid=11181 tid=MainThread file=remedy_helper.py:create_incident:287 | Error occured, status_code=400, url='https://url-restapi.onbmc.com:443/api/arsys/v1.0/entry/HPD:ServiceInterface', params={'fields': 'values(Incident Number, Incident_Status)'}, response=[{"messageType":"ERROR","messageText":"Required field cannot be blank.","messageNumber":326,"messageAppendedText":"HPD:Help Desk : Contact Company"}]
2022-08-17 15:07:36,030 ERROR pid=11181 tid=MainThread file=remedy_incident_rest_alert_base.py:post_incident:227 | [Remedy Incident REST Alert] The search name: Ingress to ICM Missing DN. Failed to Create/Update incident
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_remedy/bin/remedy_helper.py", line 432, in retry
    return func(account_info, *arg, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_remedy/bin/remedy_helper.py", line 288, in create_incident
    raise Exception(msg)
Exception: Authentication failed, status_code=401, url='https://url-restapi.onbmc.com:443/api/arsys/v1.0/entry/HPD:ServiceInterface', params={'fields': 'values(Incident Number, Incident_Status)'}, response=[{"messageType":"ERROR","messageText":"Authentication failed","messageNumber":623,"messageAppendedText":"remedy_user"}]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_remedy/bin/remedy_incident_rest_alert_base.py", line 200, in post_incident
    proxy_config=self.proxy_config,
  File "/opt/splunk/etc/apps/Splunk_TA_remedy/bin/remedy_helper.py", line 454, in retry
    return func(account_info, *arg, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_remedy/bin/remedy_helper.py", line 288, in create_incident
    raise Exception(msg)
Exception: Error occured, status_code=400, url='https://url-restapi.onbmc.com:443/api/arsys/v1.0/entry/HPD:ServiceInterface', params={'fields': 'values(Incident Number, Incident_Status)'}, response=[{"messageType":"ERROR","messageText":"Required field cannot be blank.","messageNumber":326,"messageAppendedText":"HPD:Help Desk : Contact Company"}]
I have a separate application creating tickets via REST and was told to use HPD:IncidentInterface_Create. I'm not sure what the difference is (if any) between running the search manually and having an alert trigger it, but I am stumped. If anyone can offer some insight I would appreciate it. Thanks! Chad
Hello, We are using Splunk Cloud to centralize all our logs, and we are currently struggling with the Bitdefender implementation. We have added the HTTP Event Collector, and are now stuck on the final step of sending the logs from Bitdefender to Splunk. When I run the code to connect the two:
curl -k -X POST OUR_GRAVITYZONE_API/v1.0/jsonrpc/push -H 'authorization: Basic GRAVITYZONE_API_KEY' -H 'cache-control: no-cache' -H 'content-type: application/json' -d '{ "params": { "status": 1, "serviceType": "splunk", "serviceSettings": { "url": "https://input-OUR_SPLUNK_CLOUD_LINK:8088/services/collector", "requireValidSslCertificate": false, "splunkAuthorization": "Splunk HTTP_EVENT_KEY" }, "subscribeToEventTypes": { "hwid-change": true, "modules": true, "sva": true, "registration": true, "supa-update-status": true, "av": true, "aph": true, "fw": true, "avc": true, "uc": true, "dp": true, "device-control": true, "sva-load": true, "task-status": true, "exchange-malware": true, "network-sandboxing": true, "malware-outbreak": true, "adcloud": true, "exchange-user-credentials": true, "exchange-organization-info": true, "hd": true, "antiexploit": true }, "jsonrpc": "2.0", "method": "setPushEventSettings", "id": "1" }' }
it returns the error:
{ "id": null, "jsonrpc": "2.0", "error": { "code": -32600, "message": "Invalid Request", "data": { "details": "Invalid or missing request id. Notifications are not supported" } } }
Are there any fixes that we could do to forward our logs from GravityZone into Splunk Cloud?
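For comparison: in JSON-RPC 2.0, the jsonrpc, method, and id members belong at the top level of the request body, next to params, not inside it. In the command above they end up nested under params (and the final brace falls outside the quoted string), which matches the "Invalid or missing request id" response. A sketch of the same call with the body rebalanced (endpoint, key, and settings taken from the question; the event-type list is abbreviated here):

  curl -k -X POST OUR_GRAVITYZONE_API/v1.0/jsonrpc/push \
    -H 'authorization: Basic GRAVITYZONE_API_KEY' \
    -H 'content-type: application/json' \
    -d '{
      "jsonrpc": "2.0",
      "method": "setPushEventSettings",
      "id": "1",
      "params": {
        "status": 1,
        "serviceType": "splunk",
        "serviceSettings": {
          "url": "https://input-OUR_SPLUNK_CLOUD_LINK:8088/services/collector",
          "requireValidSslCertificate": false,
          "splunkAuthorization": "Splunk HTTP_EVENT_KEY"
        },
        "subscribeToEventTypes": { "av": true, "fw": true, "malware-outbreak": true }
      }
    }'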
Recently upgraded (on-prem) from 8.2.6 to 9.0.0.1 and now getting this message on my dashboards: "This dashboard version is missing. Update the dashboard version in source." (The "Learn more" link following it fails, like they always have... wish they'd fix that.) One other person that asked this was told to read up on the jQuery 3.5 upgrade, but that makes no sense to me, and the documentation didn't tell me what to do. I have to assume it's simply looking for a tag in the XML code, but what and where? Thanks!
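For what it's worth, the message refers to the version attribute on the Simple XML root element, which Splunk 9 checks; a minimal sketch of the change (version="1.1" opts the dashboard into the jQuery 3.5 code path; whether your root tag is <dashboard> or <form> depends on the dashboard):

  <dashboard version="1.1">
    ...
  </dashboard>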