All Topics


Greetings folks. I installed the TA-ms-teams-alert-action to... you probably guessed... send alert messages to Teams. After installation, exactly two messages were sent successfully to Teams; I even took screenshots. I recently realized I had not received any messages for events that I knew had happened, so I started digging. It looks like a lot of messages are stuck in a resending state. Further digging in the logs indicates that when the TA tried to send a message to the Teams webhook, it received a 404:

2022-04-06 00:35:45,922 ERROR pid=123018 tid=MainThread file=cim_actions.py:message:280 | sendmodaction - signature="Microsoft Teams publish to channel has failed!. url=https://totallyvalid.webhook.office.com/webhookb2/XXXXX , data={ }, HTTP Error=404, HTTP Reason=Not Found, HTTP content=<!DOCTYPE html> <span><H1>Server Error in '/WebhookB2' Application.<hr width=100% size=1 color=silver></H1> <h2> <i>The resource cannot be found.</i> </h2></span> <font face="Arial, Helvetica, Geneva, SunSans-Regular, sans-serif "> <b> Description: </b>HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. &nbsp;Please review the following URL and make sure that it is spelled correctly. <br><br> <b> Requested URL: </b>/webhookb2/XXXXX<br><br>

I am unclear how to proceed. I've changed the webhook URLs above for privacy, but the hooks in the logs and in the TA match the hooks in the Teams connector configuration. I know the webhooks work because they are in use by other tools and are not failing. I tested the webhooks from my laptop and was able to send a message. I tested the webhook from a search head and was able to send a message. Something appears to be munging the webhook URL, but I cannot determine how or where. And since it worked previously and has not changed (I am the only person with access), I can't figure it out. I suspect that this line, "Server Error in '/WebhookB2' Application.", is relevant. This is on Splunk Enterprise 8.2.2.2. Thoughts or strategies would be appreciated.
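A quick way to see exactly what URL the TA uses at send time is to pull its own error lines back out of Splunk and extract the url value. This is only a sketch: it assumes the cim_actions errors land in index=_internal (some installs route modular alert logs to index=cim_modactions instead), and the rex pattern is written against the error line quoted above.

(index=_internal OR index=cim_modactions) "Microsoft Teams publish to channel has failed"
| rex "url=(?<webhook_url>\S+)"
| stats count latest(_time) as last_seen by webhook_url
| convert ctime(last_seen)

If the extracted webhook_url differs in any way from what is saved in the alert action configuration (extra quoting, a trailing character, an old hook), that points at where the URL is being munged.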
The Monitoring Console is very slow: the Overview page takes five minutes to load to report on about 150 indexers, and any other page takes a couple of minutes to load. What can it be?
Hi Community, I have an SPL query that runs from a saved search in Splunk Enterprise. When I run the query in Splunk I get output, but when I run the same query from a Linux server using a curl command I do not get any response. I have verified that curl can connect to the API and obtain a response by getting the status code in the output. Example of the curl command:

/usr/bin/curl -sSku username:password https://splunk:8089/servicesNS/admin/search/search/jobs/export -d search="| savedSearch Trading_test host_token=MTE_MTEDEV_Greening time_token.earliest=-30d time_token.latest=now " -d output_mode=csv -d exec_mode=oneshot > output.csv

I tried to break the problem down to see where it goes wrong. I found that the saved search I was using had a table command to limit the number of columns generated. So I created a new saved search without any table command, and with that one I was able to get the output as raw data. This is such unusual behaviour that I am not able to figure out what could have gone wrong. Could anyone let me know why this causes a problem? Are there other alternatives I can use to fix it? Thanks in advance.

Regards, Pravin
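One experiment worth trying, offered only as a sketch: end the saved search with an explicit fields command immediately before the table command, so the export pipeline is told up front which columns to keep. The index, sourcetype and field names below are hypothetical placeholders for whatever the real Trading_test search uses.

index=trading sourcetype=trade_events
| fields _time order_id status
| table _time order_id status

If that version exports cleanly over curl while the original does not, the difference lies in how the table command interacts with the export endpoint rather than in the curl call itself.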
I am checking the upgrade readiness of my Splunk apps under the ES search head. (We have a search head cluster and changes are pushed from there, and I am told that changes, once pushed, flatten both the local and default folders.) While checking, I also want to know whether any custom changes have been made to any app. If there are custom changes, I need to inform the respective owner to fix them before updating the app to make it py2 and py3 compatible. Now, I don't have access to the file system. I am told that by looking at btool debug output I can find out whether custom changes have been applied to an app. I just don't know what specifically in the btool output gives it away when a custom change has been made. What exactly should I look for in btool to know that? Any assistance here?
I configured the app, however it keeps returning to the setup page. Easy fix. Also, I have the ssl_check3.py script working fine and it's pulling cert info as expected, however the manual one (ssl_checker2.py) is failing. I deployed the app via the deployment server, so there is no "local" folder, and therefore no local/ssl.conf either. I looked at ssl_check2.py and it appears to also look for default/ssl.conf, however when I manually run the script it returns the error "No such file or directory: '/opt/splunk/etc/apps/ssl_checker/bin/../local/ssl.conf'". I tried, just for testing, to create a local/ssl.conf and it returned this error: "'str' object has no attribute 'decode'". It also created an empty ssl.conf_tmp in local, which I assume is a result of the above error?
Background information: In our system, every visit consists of one or more actions. Every action has a name, and in Splunk it's a field named "transId". Every time an action is triggered, it gets a unique sequence number, and in Splunk it's a field named "gsn". A customer has a unique id, and in Splunk it's a field named "uid". During a customer's visit to our system, they have a unique session id, and in Splunk it's a field named "sessionId". If we want to locate a complete operation of a user, we need to use uid and sessionId together. Like many other systems, the order of actions in our system is fixed under normal circumstances.

What we want: We want to create an alert to monitor abnormal orders of actions. For example, an important action named "D" sits at the end of an action chain. Under normal circumstances, you must access our system in the action order "A B C D". But some hackers may skip action B, which may be an action that verifies their identity. The problem is I don't know the commands to get the abnormal results. We can accept having to input the expected order of actions for every action chain; it would be better to read the order from a configuration file.

What I've tried:

| stats count by sessionId uid transId gsn _time
| sort 0 sessionId uid _time

I can get every user's order of actions with this search. Can you give me some advice? If you need more information, you can ask me here. Best wishes!
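One possible shape for the detection, as a sketch only: collapse each session into an ordered string of actions and flag sessions that reach the final action without having passed through the full expected chain. The index and sourcetype are placeholders, the transId values are assumed to be literally A, B, C and D, and sorting by _time (with gsn as a tiebreaker) is assumed to reflect the true order.

index=your_index sourcetype=your_sourcetype
| sort 0 uid sessionId _time gsn
| stats list(transId) as action_sequence by uid sessionId
| eval action_sequence=mvjoin(action_sequence, " ")
| where like(action_sequence, "%D%") AND NOT like(action_sequence, "%A%B%C%D%")

The where clause keeps sessions that contain D but never saw A, B, C and D in that relative order. The expected pattern could be kept in a lookup (one row per action chain) and joined onto each session before the comparison, rather than being hard-coded as above.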
Hi Everyone, below is my query to apply a thousands comma separator:

| inputlookup abc.csv
| chart sum(field1) as field1 by field2, field3
| addtotals
| fieldformat/eval = tostring(field1, "commas")

In the result I am not getting commas in the field1 values. If I alter my query to split by only one field (field2 or field3) then I get the expected result, but I want the sum of the field split by both field2 and field3. Can someone help me with this issue? Thanks, ND.
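The likely reason the formatting stops working with two split-by fields is that chart ... by field2, field3 turns each field3 value into its own column, so there is no longer a column literally named field1 for tostring to act on. A sketch of one way around it, reusing abc.csv and the field names from the question, is to apply the formatting to every numeric column with foreach:

| inputlookup abc.csv
| chart sum(field1) as field1 by field2, field3
| addtotals
| foreach * [ eval "<<FIELD>>" = if(isnum('<<FIELD>>'), tostring('<<FIELD>>', "commas"), '<<FIELD>>') ]

Note that eval with tostring turns the values into strings, so keep this as the final step (after addtotals); the isnum() guard leaves the non-numeric field2 row labels untouched.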
I have a record that is returned because it matches a particular substring. Now I want to extract the whole string that the substring is part of. For example, I give "process completed" as the substring in my query, which returns a record. This record contains "Takeover process completed with 390 files". Now I want to get the whole "Takeover process completed with 390 files" string. How do I do this? Can somebody please help?
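A sketch using rex, assuming the message sits on a single line of the event and you want everything on that line around the matched substring (index=your_index is a placeholder for the real search):

index=your_index "process completed"
| rex "(?<matched_line>[^\r\n]*process completed[^\r\n]*)"
| table matched_line

If the string you want is bounded by something other than line breaks (quotes, a field delimiter), swap the [^\r\n] character classes for that boundary.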
I want to find the difference between the maximum value and the minimum value in the multi-value field that has been grouped together with the transaction command. Specifically, I group the web access logs by ID, and then I would like to find how long that ID was active, from login through its operations to logout. Do you have an idea for the SPL?
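A sketch, with the index, the ID field and the "login"/"logout" markers all as assumptions: the transaction command already computes a duration field, which is exactly the latest _time minus the earliest _time within each group.

index=web_access
| transaction ID startswith="login" endswith="logout"
| table ID duration

If you don't need transaction for anything else, | stats range(_time) as duration by ID gives the same number more cheaply.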
Hi, I have multiple fields counting how many items pass through gates:

| timechart count(eval(like(gate_id, "RG%"))) as items_RG, count(eval(NOT like(gate_id, "RG%"))) as all_items by building

I want to exclude the counts of items_RG from all_items, so I'm using:

| eval Total=all_items-items_RG

But Total does not show up in the output. When I use stats instead, I lose the time column, so I can't show the graph as a timechart:

| stats count(eval(like(gate_id, "RG%"))) as items_RG, count(eval(NOT like(gate_id, "RG%"))) as all_items by building
| eval Total=all_items-items_RG

I also tried eventstats but couldn't get what I want.
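When timechart splits by building, the output columns are named things like "items_RG: buildingA", so a plain eval Total=all_items-items_RG has nothing to operate on. One workaround, sketched below and appended to the same base search, is to do the time bucketing and the arithmetic before pivoting, then reshape with xyseries; the 1h span is an assumption, adjust to taste.

| bin _time span=1h
| stats count(eval(NOT like(gate_id, "RG%"))) as all_items count(eval(like(gate_id, "RG%"))) as items_RG by _time building
| eval Total=all_items-items_RG
| xyseries _time building Total

xyseries puts the result back into the shape timechart would produce (one column per building), so it still renders as a time chart.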
I have to extract the highlighted value as a single field in Splunk. Any help?
Hi! I'm struggling to get Splunk to recognize a file that's in asterisk-delimited format. I have props.conf set as below, running on a Splunk 7.3.8 HF, sending the cooked data to an 8.1.72 search peer. Nothing I've tried will get the data to parse correctly, and from everything I'm reading, this should work. I've opened a support case, but I'm going around in circles with them, so if anyone has any thoughts here I would appreciate it!

[ sourcetype ]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
disabled=false
FIELD_DELIMITER=*
FIELD_NAMES=timestamp,.....
TRUNCATE=50000

Thanks, Stephen
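While the props.conf side is being sorted out, a quick search-time check can confirm the delimiter and column order. This is only a sketch with hypothetical index/sourcetype names and only the first few columns spelled out:

index=your_index sourcetype=your_sourcetype
| eval columns=split(_raw, "*")
| eval timestamp=mvindex(columns, 0), col2=mvindex(columns, 1), col3=mvindex(columns, 2)
| table timestamp col2 col3

If the columns come out clean here, the raw data itself is fine and the problem is confined to how the delimiter settings are being applied during parsing.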
Hi All, we have a concern raised by one of our application teams: they are seeing incorrect data in their dashboard. When we validated it by looking at the source file that Splunk is reading, we noticed that the events Splunk ingested do not actually exist in the log source.

Problem: incorrect data is ingested into Splunk in the status field value:

[13/Apr/2022:06:33:03 +0000] fip="10.X.X.X" ip="10.X.X.X" method="POST" url="/xxx/service/decrypt" port=8444 status=2
[13/Apr/2022:04:30:01 +0000] fip="10.X.X.X" ip="10.X.X.X" method="POST" url="/xxx/service/decrypt" port=8443 status=2
[12/Apr/2022:11:10:27 +0000] fip="10.X.X.X" ip="10.X.X.X" method="POST" url="/xxx/service/decrypt" port=8443 status=2
[12/Apr/2022:09:11:37 +0000] fip="10.X.X.X" ip="10.X.X.X" method="POST" url="/xxx/service/decrypt" port=8444 status=2

Actual data present on the application server (path: /var/mware/logs/xxx/localhost_access_log.2022-04-12.11.log):

[12/Apr/2022:11:10:26 +0000] fip="10.X.X.X" ip="10.X.X.X" method="POST" url="/xxx/service/decrypt" port=8443 status=200 size=216 response=1
[12/Apr/2022:11:10:27 +0000] fip="10.X.X.X" ip="10.X.X.X" method="POST" url="/xxx/service/decrypt" port=8443 status=200 size=216 response=1
[12/Apr/2022:11:10:27 +0000] fip="10.X.X.X" ip="10.X.X.X" method="POST" url="/xxx/service/decrypt" port=8443 status=200 size=219 response=1
[12/Apr/2022:11:10:27 +0000] fip="10.X.X.X" ip="10.X.X.X" method="POST" url="/xxx/service/decrypt" port=8443 status=200 size=216 response=0
[12/Apr/2022:11:10:27 +0000] fip="10.X.X.X" ip="10.X.X.X" method="POST" url="/xxx/service/decrypt" port=8443 status=200 size=219 response=0
[12/Apr/2022:11:10:27 +0000] fip="10.X.X.X" ip="10.X.X.X" method="POST" url="/xxx/service/decrypt" port=8443 status=200 size=216 response=1

Monitoring stanza details:

[monitor:///var/mware/logs/*/*localhost*]
sourcetype = access_combined
index = test
disabled = 0
ignoreOlderThan = 1d
blacklist=\.(gz)$

splunkd.log: there is no significant ERROR/WARN/INFO related to this issue. Could you please guide me on why Splunk would ingest incorrect information that is not present in the source, and how to troubleshoot this issue?
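The ingested events look like the source lines cut short at "status=2" (the rest of status=200 plus the size and response fields missing), which suggests the line was read or broken mid-write rather than that bad data was generated. A sketch of a check on the forwarder/indexer internals for that source, assuming the default _internal retention still covers the window in question:

index=_internal sourcetype=splunkd (component=LineBreakingProcessor OR component=AggregatorMiner OR component=TailReader OR component=WatchedFile) "localhost_access_log"
| stats count by host component log_level

It may also help to compare the raw length of the ingested events (| eval raw_len=len(_raw)) against the source lines; if the indexed events are consistently shorter, the truncation is happening at read or parse time.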
Hi Everyone, thanks to "kamlesh_vaghela" for helping me with importing the userid into the search query. But I am having trouble converting the data to a pandas dataframe because of a JSONDecodeError.

import requests
import json
import urllib3
from base64 import b64encode
import urllib.parse
from csv import reader

urllib3.disable_warnings()
data = []

def fetch_data_using_userid(userid):
    url = "https://localhost:8089/servicesNS/admin/search/search/jobs/export"
    payload = {
        'search': f'search index=_internal earliest=-1h sourcetype="{userid}" | stats count by sourcetype',
        'output_mode': 'json'
    }
    safe_payload = urllib.parse.urlencode(payload)
    userAndPass = b64encode(b"admin:admin123").decode("ascii")
    headers = {
        'Authorization': 'Basic %s' % userAndPass,
        'Content-Type': 'application/x-www-form-urlencoded'
    }
    response = requests.request("POST", url, headers=headers, data=safe_payload, verify=False)
    return response.text

# open file in read mode
with open('user_data.csv', 'r') as read_obj:
    # pass the file object to reader() to get the reader object
    csv_reader = reader(read_obj)
    header = next(csv_reader)
    if header is not None:
        # iterate over each row in the csv using the reader object
        for row in csv_reader:
            # row is a list that represents a row in the csv
            print(row)
            response_text = fetch_data_using_userid(row[0])
            data.append(response_text)
            print(response_text)

The result I am appending is somehow not in the correct JSON format, and because of that, while converting it to a dataframe I am unable to separate the cells into different columns. Has someone already worked with this and can suggest some ideas? Thanks.
I can't seem to find an example of parsing a JSON array with no parent. Meaning, I need to parse [{"key1":"value1"}, {"key1":"value2"}], but I only see examples like {"MyList": [{"key1":"value1"}, {"key1":"value2"}]}.

This is the JSON I have:

[{
    "id": "123",
    "percentage": 25.0,
    "active": true,
    "second_id": "456",
    "creation time": "2022-04-13T09:30:06.517",
    "event_age": {
        "hours": 3,
        "minutes": 4,
        "seconds": 2
    }
}, {
    "id": "789",
    "percentage": 56.0,
    "active": true,
    "second_id": "222",
    "creation time": "2022-04-13T09:30:06.517",
    "event_age": {
        "hours": 6,
        "minutes": 2,
        "seconds": 2
    }
}]

I need to filter only the records whose event_age is greater than 4 hours and present them in a table:

id    percentage    active    second_id    creation time    event_age
789    56    true    222    2022-04-13T09:30:06.517    hours: 6, minutes: 2, seconds: 2

Thanks!
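A sketch using spath, assuming each event's _raw is one of these top-level arrays and that "event_age > 4" means the hours value (the leading index/sourcetype is a placeholder). spath path={} pulls the array elements out even though there is no parent key, mvexpand makes each element its own result, and a second spath expands that element's fields.

index=your_index sourcetype=your_json_sourcetype
| spath path={} output=element
| mvexpand element
| spath input=element
| rename "creation time" as creation_time
| where 'event_age.hours' > 4
| table id percentage active second_id creation_time event_age.*

The rename just avoids having to quote the "creation time" field name with its embedded space in the rest of the pipeline.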
Hi all, I'm trying to run a cmd script on a UF. It's located in %SPLUNK_HOME%\etc\apps\log4jscan\bin\log4jscan.cmd and the content is:

..\static\log4j2-scan.exe --all-drives --scan-log4j1 --scan-logback --csv-log-path "%SPLUNK_HOME%\var\log\log4jscan"

The inputs.conf is in %SPLUNK_HOME%\etc\apps\log4jscan\default and looks like this (the interval will be changed to run once per month):

[script://..\bin\log4jscan.cmd]
disabled = False
interval = 600

I added > debug.txt 2>&1 to the script, but no file is created. Any ideas? Thanks...
Hi, I am trying to work with Splunk ESS. Currently I am stuck. Is there any way we can alert a user once they are added to an investigation as a collaborator?
I am trying to run Splunk for the first time using C:\Program Files\splunk\bin>splunk start in Windows cmd, but I am getting this error:

Warning: cannot create "C:\Program Files\splunk\etc\licenses\download-trial"

Please help.
I want to have an overview of malicious network traffic in my network, and I decided to filter out all the "good" traffic to find the bad. I need a database of trusted IP addresses that contains the IPs of companies like social media (Facebook, Twitter, etc.), news (CNN, NBC, etc.), and all the other trusted websites we often visit. Is there a public database where all these IP addresses are kept, so I can implement it in my Splunk environment?
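Whatever allow-list source you end up with, the Splunk side usually looks like a CSV lookup of trusted CIDR ranges matched against the destination IP. A sketch, assuming a hypothetical lookup definition named trusted_ips_lookup (built over a trusted_ips.csv with columns cidr and owner, configured with match_type CIDR(cidr)) and firewall-style field names:

index=network sourcetype=firewall_traffic
| lookup trusted_ips_lookup cidr AS dest_ip OUTPUTNEW owner AS trusted_owner
| where isnull(trusted_owner)
| stats count by src_ip dest_ip

Everything the lookup matches is dropped, leaving only traffic to destinations outside the trusted ranges. Keep in mind that CDN and cloud IP ranges change constantly, so an IP allow-list will need regular refreshing.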
Hi everyone, I just watched an excellent demo/tutorial (https://my.phantom.us/video/78/) by someone called Ian Forrest. During the video (at about 45 minutes) he demos an excellent custom function that looks in the cached SOAR internals for the cached results from previous executions of a specific app/action. He did mention that this was a 'work in progress', and I can't find this CF in the Community Hub or on GitHub anywhere. Does anyone know what the status of his custom function is? Cheers, Mark.