All Topics


I am trying to find matches for field b when there is a partial match in field a. Field a comes from an imported CSV with hostnames/IPs; field b comes from an index search. Is there a way to find which values in field b match one of the field a values, whether it is a server name or an IP? I've tried some combinations of eval case statements (match/like) and attempted regex, but from my understanding you have to provide the value rather than a field. Would I need to run some sort of loop searching field a for all values in field b? That seems like it would be pretty resource-heavy and inefficient. The end result is that I would do a count on matches; if field a is a 0 or 1, then I would make my chart for that value. Any direction or advice would be greatly appreciated, even just a pointer to a specific part of the documentation (currently on 7.3)! Let me know if more data is needed and I'll be glad to sanitize and provide more output; I was trying to keep the post short.

Example:

field a:
server1,10.0.0.7,10.0.0.8
server2,10.0.0.9,10.0.0.10

field b:
server1
10.0.0.9
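A minimal sketch of one approach, assuming the CSV has been imported as a lookup named hosts.csv with one column field_a holding the comma-separated values, and that the index search yields field_b (index and all names here are placeholders): split the CSV column into individual values and let a subsearch turn them into a filter.

index=your_index
    [| inputlookup hosts.csv
     | makemv delim="," field_a
     | mvexpand field_a
     | rename field_a AS field_b
     | fields field_b
     | format ]
| stats count BY field_b

The subsearch expands every CSV value into an OR filter on field_b, so only matching events reach stats; the per-value match count is then available for charting, with no loop needed.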
There are 100s of APIs in my application, and I'm logging exceptions per API. I can get stats on the total number of exceptions in a time window by using:

Exception | stats count by uri

This gives me the exception count for each uri in tabular format. However, I would like this data as a timechart for each uri. That is easy if I hardcode the uri and get the exception count as a time series, but I don't want to do that for 100s of APIs. | timechart count by api puts many under the OTHER and NULL categories. Also, I would want api as the row and time as the column, preferably in visual format: a timechart against each api.
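A hedged sketch (uri as in the question): limit=0 lifts the default series cap that creates OTHER, useother/usenull suppress those buckets, and the untable/xyseries pair flips the result so each uri is a row and each time bucket a column.

Exception
| timechart limit=0 useother=false usenull=false count BY uri
| untable _time uri count
| xyseries uri _time count

Dropping the last two lines keeps the normal timechart layout (time as rows) if the built-in chart visualization is preferred.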
Hi, does anyone know why I am getting an error while installing the Splunk Enterprise setup? The error reads: "Splunk Enterprise setup ended prematurely because of an error. Your system has not been modified." If anyone knows the answer, please help me.
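One hedged way to get more detail, assuming the Windows MSI installer: run it from an elevated prompt with verbose logging and look for the first error in the log (the .msi file name below is illustrative):

msiexec /i splunk-enterprise-x64.msi /l*v splunk_install.log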
That's the problem. I have a Sysmon JSON to examine but, although in the "Add Data" section everything looks OK, once I get to search time the events are joined and split without pattern. After thinking about it a second, I noticed that the document did not have timestamps and Splunk was complaining about it, so I solved that, but the issue was still there. The problem looks like this: sometimes the event begins OK but merges with other events before its end, and others (like the example above) don't have a heading. I'm not using any plug-ins or apps; I just clicked Add Data, selected the document as a non-timestamped JSON, and started searching. How would you solve this? The JSON to be analyzed is downloadable from Blue Team Labs Online.
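A minimal props.conf sketch of the usual fix, assuming one JSON object per line; the sourcetype name and the line-breaking/timestamp settings are assumptions to adapt to the actual file layout:

[sysmon:json]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# if the objects really carry no timestamp, fall back to index time:
DATETIME_CONFIG = CURRENT

SHOULD_LINEMERGE = false is what stops Splunk from gluing adjacent events together when it can't find a timestamp to break on.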
This is my _raw data:

06/24/2021 17:26:17 +0530, info_search_time=1624535777.471, Dns Rule=Passed, HOSTNAME=Passed, username=Passed, ssh Timeout rule=Passed, Node Name="IND-JLN-DIV-COR-SW-02", snmp rule=Passed, udld Rule=Passed, Enable Password=Passed, Snmp config rule=Passed, Line Vty 0 4 Timeout & acl=Passed, Line Con 0 timeout=Passed, Service Policy=Passed, Https Rule=Passed, Line Con 0=Passed, Line aux 0=Passed, Node Ip Address="3.205.208.35", Don't Username=Passed, Service Password Encryption=Passed, Aaa Server-GE=Passed, Line Vty 5 15=Passed, Image Verification=Passed, Bootp Server=Passed, Config Title="4/26/2021 01:02 PM - Running", Line Vty 0 4=Passed, Logging Rule=Passed, Banner Rule=Passed, Config Type=Running, Finger Rule=Passed, Http Server=Passed, Name Server=Passed, Pad Service=Passed, System Boot=Passed, Telnet Rule=Passed, Trap Source=Passed, NTP Rule- GE=Passed, ftp service=Passed, ssh version=Passed, Source Route=Passed, Http Access Class=Passed

I need some of the fields extracted from that data:

Dns Rule=Passed, HOSTNAME=Passed, username=Passed, ssh Timeout rule=Passed, Node Name="IND-JLN-DIV-COR-SW-02.genpact.com", snmp rule=Passed, udld Rule=Passed, Enable Password=Passed, Snmp config rule=Passed, Line Vty 0 4 Timeout & acl=Passed, Line Con 0 timeout=Passed, Service Policy=Passed, Https Rule=Passed, Line Con 0=Passed, Line aux 0=Passed, Node Ip Address="3.205.208.35", Don't Username=Passed, Service Password Encryption=Passed, Aaa Server-GE=Passed, Line Vty 5 15=Passed, Image Verification=Passed, Bootp Server=Passed

Please help with the solution; it would be appreciated.
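Because several keys contain spaces, automatic key=value extraction may miss them. A hedged rex sketch for a few of the fields (the destination field names are my own; extend the same pattern for the rest):

... | rex field=_raw "Dns Rule=(?<dns_rule>\w+)"
    | rex field=_raw "ssh Timeout rule=(?<ssh_timeout_rule>\w+)"
    | rex field=_raw "Node Name=\"(?<node_name>[^\"]+)\""
    | rex field=_raw "Node Ip Address=\"(?<node_ip>[^\"]+)\""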
Hi all. As the subject says: can I tell an HF not to PARSE the events, and just do plain TCP forwarding of the raw data? I can replace an HF with a plain TCP-forwarding tool and it works. But the question is about the HF, since I need to deploy all props/transforms on the INDEXER BUT ALSO ON the HF if I do not want to index erroneous events.

I mean, output from the UF is balanced between the real INDEXER and the HF (do not question why).

RIGHT SCENARIO (props/transforms on both indexer and HF):
UF --> IDX --> parsing --> correct events
UF --> HF --> parsing --> IDX --> correct events

WRONG SCENARIO (props/transforms only on the indexer, not on the HF):
UF --> IDX --> parsing --> correct events
UF --> HF (bad event parsing: no timestamp, no linebreak, etc.) --> IDX --> events already badly parsed --> erroneous events indexed!!!

Thanks. Bye...
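For what it's worth, a hedged outputs.conf sketch of the only "raw" knob I know of on a forwarder; note this is not equivalent to normal forwarding, since with sendCookedData=false the receiver sees a plain TCP stream rather than Splunk-to-Splunk traffic (the server list is a placeholder):

[tcpout:to_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
sendCookedData = false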
Hi Team, I trust you are doing well. I recently joined as a member of the global voice and video remote infrastructure team of one of your clients/customers, Worldpay/FIS. How do I associate myself with my company's account, and who is my CSM? My email address is  perhaps the account might be listed under someone with fisglobal.com or worldpay.

Kindly assist.
Mujeeb
00917337528066
Hi, starting fresh; maybe I can explain a bit better here. I found another similar issue to mine here:

https://community.splunk.com/t5/Getting-Data-In/How-to-split-a-json-array-into-multiple-events-with-...

I need it to break out the 20+ items in the string. For some reason, setting up my source type like in that post just gives me the first user's worth of info; it doesn't break them all out. Here is a dump of one of the raw JSON requests. It's truncated at the end; I'm looking into that. But I basically need to break out each user in this list with their stats, just like that previous post talks about. Any help would be appreciated.
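A hedged props.conf sketch for splitting a JSON array of objects into one event per element; the sourcetype name is a placeholder, and the LINE_BREAKER/SEDCMD pair assumes the objects are separated by },{ with the whole payload wrapped in [ ]:

[json_array_split]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s*,\s*)\{
SEDCMD-strip_open = s/^\s*\[//
SEDCMD-strip_close = s/\]\s*$//
KV_MODE = json

The capture group in LINE_BREAKER is discarded, so the } stays with the previous event and the { opens the next; the two SEDCMDs then trim the array brackets off the first and last events so each event is a valid JSON object.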
Hi Splunkers. My post is about session management for the LDAP authentication method. We need to control multiple sessions in Splunk: today a user can have two Splunk Web sessions from two sources or PCs, and ideally, if the same user signs in from a second source or PC, the session on the previous one would be closed, leaving only the last session active. I don't know how to resolve this, or whether there is a parameter to configure in web.conf, or some other recommendation. Thanks.
Hi fellow Splunkers! I am an admin for our Splunk Enterprise environment, and when users on any of the teams we support leave their team or leave the company, we try to stay on top of reassigning the knowledge objects they owned to a current member of that team. We do this from the UI because we run 2 clustered environments with 3 SHs each. We reassign these objects by navigating to Settings > All configurations > Reassign Knowledge Objects.

I have come across an issue where I am unable to reassign field extractions with colons in their name. Examples:

wineventlog:security : EXTRACT-WindowsSecurityFields
source::/var/opt/jfrog/artifactory/logs/request.log : EXTRACT-Action

When I attempt to reassign these I get an error. Has anyone else run into this? Has anyone found a solution (other than reassigning these from the back end)? Any feedback is appreciated!
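A hedged sketch of the back-end route, since the UI chokes on the colons: ownership can be changed through the REST ACL endpoint, with the stanza name URL-encoded (%3A for each colon, %20 for spaces). Host, app, and user names below are placeholders.

curl -k -u admin https://sh1:8089/servicesNS/olduser/search/data/props/extractions/wineventlog%3Asecurity%20%3A%20EXTRACT-WindowsSecurityFields/acl \
     -d owner=newuser -d sharing=app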
I want to set dynamic SLAs for file processing. In order to do this I need to:

1. Get the earliest HH:MM:SS the job has processed in the last 30 days.
2. Get the latest HH:MM:SS the job has processed in the last 30 days.
3. Get the average time the jobs process in the last 30 days.
4. Get the difference between the earliest and latest.

Most of what I have found around stats with earliest and latest includes the date, so I end up with the time the job ran on day 1 and day 30. I need the earliest/latest by HH:MM:SS only, and then the difference between them.
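A hedged sketch of one way, assuming each job event's _time is the processing time: convert _time to seconds since midnight so the date drops out, then aggregate (the index name is a placeholder).

index=jobs earliest=-30d@d
| eval secs = _time - relative_time(_time, "@d")
| stats min(secs) AS earliest_secs max(secs) AS latest_secs avg(secs) AS avg_secs
| eval diff_secs = latest_secs - earliest_secs
| eval earliest = tostring(earliest_secs, "duration"),
       latest   = tostring(latest_secs, "duration"),
       average  = tostring(round(avg_secs), "duration"),
       diff     = tostring(diff_secs, "duration")
| table earliest latest average diff

tostring(X, "duration") renders a number of seconds as HH:MM:SS, which covers both the earliest/latest display and the diff.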
We are using Splunk Cloud 8.2.2105.2 (build 164754a2784c). When we came into work this morning, we found a number of our dashboards suddenly not rendering correctly, and we're at a loss to explain why. It seems some centering is no longer happening correctly, and font sizing has changed. You can see this with a very simple dashboard that I put together:

<dashboard hideChrome="true" hideFilters="true" theme="light">
  <label>Basic Dashboard</label>
  <row>
    <panel>
      <single>
        <search>
          <query>| makeresults count=1 | streamstats count | eval msg = 5 | stats count</query>
        </search>
        <option name="underLabel">A field we should see</option>
        <option name="height">50</option>
      </single>
    </panel>
  </row>
</dashboard>

When this is rendered, the text ("1") is large and cut off, and the label ("A field we should see") does not show. Whatever is going on has affected a number of our production dashboards; again, things were fine last night, but this morning the rendering is bad. We have not had to mess with style sheets in the past to get the rendering correct, and don't think we should have to here. What changed, and how can we get our dashboards rendering properly again? Help?

Thx
john
I enter the correct URL and API token, but Phantom appends to the URL when testing.
We have the Splunk TA connectivity app on our Splunk Cloud, and my status field should be either successful or failed, with unique values, but I realize that some of the values are not matching: some of the connectivity failure statuses are not getting populated. Does anyone have any suggestions?
Hello All, I have configured TA-MS-defender and we are collecting ATP logs just fine, but the incident logs keep giving me the following error:

2021-06-25 09:36:35,832 ERROR pid=4306 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS_Defender/bin/ta_ms_defender/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS_Defender/bin/microsoft_365_defender_incidents.py", line 72, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-MS_Defender/bin/input_module_microsoft_365_defender_incidents.py", line 69, in collect_events
    incidents = azutil.get_atp_alerts_odata(helper, access_token, incident_url, user_agent="M365DPartner-Splunk-M365DefenderAddOn/1.3.0")
  File "/opt/splunk/etc/apps/TA-MS_Defender/bin/azure_util/utils.py", line 57, in get_atp_alerts_odata
    raise e
  File "/opt/splunk/etc/apps/TA-MS_Defender/bin/azure_util/utils.py", line 40, in get_atp_alerts_odata
    r.raise_for_status()
  File "/opt/splunk/etc/apps/TA-MS_Defender/bin/ta_ms_defender/aob_py3/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://api.security.microsoft.com/api/incidents?$filter=lastUpdateTime+gt+2000-01-01T00:00:00Z

Any ideas or help?

Thanks,
ed
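Not a fix, but a hedged way to narrow it down: replay the failing call from the traceback outside Splunk with the same app token. A 403 here usually points at the Azure AD app registration missing the incidents permission (Incident.Read.All is my assumption for this API) or lacking admin consent.

curl -H "Authorization: Bearer $TOKEN" \
  "https://api.security.microsoft.com/api/incidents?\$filter=lastUpdateTime+gt+2000-01-01T00:00:00Z"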
Hi, I'm trying to add something in between to link to another dashboard within the same app when the user clicks the table. Could someone help with the syntax here? Thank you in advance!

myplaintable.on("click", function(e) {
    // Bypass the default behavior
    e.preventDefault();
    // add the linkage
    // Displays a data object in the console
    console.log("Clicked the table:", e.data);
});
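A hedged sketch of one way to fill in the linkage, assuming a SimpleXML table click handler where e.data carries click.value for the clicked cell; the app name, dashboard name, and form token below are placeholders:

myplaintable.on("click", function(e) {
    // Bypass the default drilldown behavior
    e.preventDefault();
    // Build a link to another dashboard in the same app,
    // passing the clicked value as a form token
    var url = "/app/my_app/my_other_dashboard?form.selected="
            + encodeURIComponent(e.data["click.value"]);
    window.open(url, "_blank");
    // Displays the data object in the console
    console.log("Clicked the table:", e.data);
});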
Hi, I'm trying to add something in between to link to another dashboard within the same app when user clicks the table. Could someone help with the syntax here? Thank you in advance!   myplaintable.on("click", function(e) { // Bypass the default behavior e.preventDefault(); // add the linkage // Displays a data object in the console console.log("Clicked the table:", e.data); });    
Hello Team, can you please suggest how we could make a JDBC connection from Splunk to a test IBM i LPAR running on AS/400?

Regards,
Hitesh
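A hedged sketch of the pieces I believe are involved, assuming Splunk DB Connect with the IBM Toolbox for Java driver (jt400.jar); the hostname is a placeholder:

JDBC driver class: com.ibm.as400.access.AS400JDBCDriver
JDBC URL:          jdbc:as400://test-lpar.example.com;prompt=false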
We have a server class configuration that looks something like this:

[serverClass:ewda_nonprod_rw]
blacklist.0 = eon-prod*
whitelist.0 = eon-test*
whitelist.1 = eon-*

[serverClass:ewda_nonprod_rw:app:ewda_nonprod_rw]
#restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

After installing the Splunk Universal Forwarder, if I rename a Windows Server computer to eon-avt-api-i-xxxxxxxxxx, set the default hostname in C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf to the same name, and restart the Splunk service, then the ewda_nonprod_rw app is deployed to the computer and all the correct logs show up in Splunk Cloud under the hostname eon-avt-api-i-xxxxxxxxxx.

We no longer want to rename the computer to match the hostname we want to use for Splunk, but I cannot get ewda_nonprod_rw to be deployed to the client without renaming the computer. If I do not rename the computer and only set the default hostname in C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf to eon-avt-api-i-xxxxxxxxxx and restart the Splunk service, then the ewda_nonprod_rw app is not deployed to the computer, and the only logs available in Splunk Cloud under the hostname eon-avt-api-i-xxxxxxxxxx are from the default splunkd and wineventlog sourcetypes. I have also tried setting serverName in server.conf to eon-avt-api-i-xxxxxxxxxx, with no luck.
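In case it helps anyone looking at this: a hedged deploymentclient.conf sketch, on the assumption that server-class whitelists match the deployment client's clientName, which is independent of both the OS hostname and the host setting in inputs.conf:

[deployment-client]
clientName = eon-avt-api-i-xxxxxxxxxx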
Please advise on a strategy for dealing with an increasing number of skipped/deferred saved searches in Enterprise Security? The numbers are increasing daily. Thank you.
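A hedged starting point for sizing the problem, using the scheduler logs in _internal (the reason values vary by version):

index=_internal sourcetype=scheduler status=skipped
| stats count BY app, savedsearch_name, reason
| sort - count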
Hello everyone, I am new to Splunk and learning the ropes. I am stuck on a query I am trying to set up. I have SNMP data coming in and I am trying to measure traffic in Mbps or Kbps. SNMP uses a continuously increasing counter that keeps adding traffic to a running total. In order to get Mbps I have to use the following calculation: ((Current_Value - Previous_Value) / (current_time - previous_time)), and then convert bytes to Mbps. This is working fine; however, I want to be able to do this "foreach" interface for a dashboard. Right now, when I use a wildcard for the interface name it breaks, because the delta calculation doesn't always use the same interface.

index=sample name="interfaces" ifName="ethernet1/1"
| where bytes_in!=0
| sort _time
| delta _time AS time_delta
| delta bytes_in AS delta_bytes_in
| eval Kbps = (delta_bytes_in * 8) / 1000 / time_delta
| eval Mbps = Kbps/1000
| table _time, Mbps

When I switch ifName="ethernet1/1" to ifName="*", this breaks. I was hoping to use foreach to iterate over each interface, but do not know how. I was hoping someone could help me with this.
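A hedged sketch using streamstats with a BY clause instead of delta, so the previous sample is always taken from the same interface (index and field names as in the question):

index=sample name="interfaces" ifName="*"
| where bytes_in != 0
| sort 0 _time
| streamstats current=false last(bytes_in) AS prev_bytes last(_time) AS prev_time BY ifName
| eval Mbps = ((bytes_in - prev_bytes) * 8) / (_time - prev_time) / 1000000
| where isnotnull(Mbps)
| timechart span=5m avg(Mbps) BY ifName

sort 0 removes the default 10,000-row cap on sort, and the first sample per interface has no previous value, so it is dropped by the isnotnull filter.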