All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I use the add-on builder to create custom apps for interacting with the AWS API through Splunk, but I've found that some of the more recent boto3 features are missing from the version embedded within Splunk. Does anyone know how to update Splunk's embedded boto3 version?
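A common workaround (a sketch, not a documented Add-on Builder feature) is to bundle the newer boto3 release inside the add-on's own lib/ directory and prepend that directory to sys.path before importing, so it shadows Splunk's embedded copy:

```python
import os
import sys

def prefer_bundled_libs(app_root):
    """Prepend <app_root>/lib so packages bundled with the add-on
    shadow the copies embedded in Splunk's Python environment."""
    lib_dir = os.path.join(app_root, "lib")
    if lib_dir not in sys.path:
        sys.path.insert(0, lib_dir)
    return sys.path[0]

# In a modular input script this would typically be:
# prefer_bundled_libs(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# import boto3  # now resolves to the wheel you dropped into lib/
```

The path layout (`lib/` under the app root) is an assumption; the point is only that whatever directory holds your newer boto3 must come before Splunk's bundled site-packages on sys.path.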
Hello all, I have a clustered indexer and SH environment. I'm now noticing a long delay before some of my data shows up. The logs are continuously generated at the source, but they don't appear in Splunk until much later; some items aren't searchable until the next day. The UF is set to monitor a directory, with all .log files to be read and sent to Splunk. There are no permissions issues and no firewall blocks either. The exact same configuration works on my QA servers but not on the prod ones; the biggest difference between the two is log volume, roughly 1:600 QA to prod. Also, each file rolls over to an archive once it hits 50 MB. Could this be related to the zipping/archiving, i.e. Splunk can't keep up while other processes are writing to the file, and the file reaches the limit and gets zipped before Splunk can finish reading it? Or would this concern something in the pipelines or limits.conf? All help is appreciated.
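One thing worth checking on high-volume forwarders (an assumption about this setup, not a confirmed diagnosis): the universal forwarder ships with a default throughput cap of 256 KBps in limits.conf, which a 600x-QA volume can easily saturate, so tailing falls behind and files can roll to archive before they are fully read. Raising or removing the cap looks like:

```
# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder
[thruput]
maxKBps = 0    # 0 = unlimited; the UF default is 256
```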
Hi, I add a + or a - sign before a percentage result like this:

| eval perc=if(s<2,"-","+").round((s/2)*100,1)."% "

But I need to subtract the percentage from 100, like below:

| eval perc=if(sam<sam2,"-","+").round(100-(sam/sam2)*100,1)."% "

When I do this, I get both + and - before the percent result (the prefix from the if() plus the minus sign of the negative rounded number). How can I avoid this, please?
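One way to avoid the double sign is to compute the rounded delta first, derive the prefix from its sign, and print the absolute value. A sketch of that logic in Python (sam/sam2 are the poster's hypothetical fields; the SPL equivalent would split the eval into two steps along the same lines):

```python
def signed_pct(sam, sam2):
    # Compute the delta first, then derive the prefix from its sign,
    # printing abs() so the number itself never carries a second sign.
    # SPL sketch: | eval d=round(100-(sam/sam2)*100,1)
    #             | eval perc=if(d<0,"-","+").abs(d)."% "
    delta = round(100 - (sam / sam2) * 100, 1)
    sign = "-" if delta < 0 else "+"
    return f"{sign}{abs(delta)}% "
```

The key design point is that the condition driving the sign is the computed value itself, not a separate comparison that can disagree with it.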
How can I configure a user and password other than admin to make REST endpoint calls to my Universal Forwarder? Current functionality:

curl -k -u admin:changeme "https://<host>:<port>/services/receivers/simple?index=abc&source=test&sourcetype=test" -d "splunk rest test"

What I want:

curl -k -u mu_user:mypwd "https://<host>:<port>/services/receivers/simple?index=abc&source=test&sourcetype=test" -d "splunk rest test"

I tried putting this new user in authentication.conf with binddnuser and binddnpassword, but it throws an Unauthorized error.
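For reference, the same call built in Python (a sketch only: the user has to exist in the forwarder's own local authentication rather than in authentication.conf, and the exact capabilities a non-admin role needs for this endpoint are worth verifying against the authorize.conf documentation):

```python
import base64
import urllib.request

def simple_receiver_request(host, port, user, password, index, payload):
    """Build an authenticated POST to the UF's receivers/simple endpoint,
    equivalent to `curl -k -u user:password ... -d "<payload>"`."""
    url = (f"https://{host}:{port}/services/receivers/simple"
           f"?index={index}&source=test&sourcetype=test")
    req = urllib.request.Request(url, data=payload.encode(), method="POST")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req
```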
Dear community, I have been using this community for years and so far I've found everything I needed. Now I am stuck! I am trying the following: I want to list all the indexes' fields, so that when I build a query I know immediately whether a specific source has that field. The second part is easy: once I have the list, I know what I need to do. So, basically, I need something like this, where 0 means the field doesn't exist and 1 means there is at least one value in that field:

Fields   index1  index2  index3  indexn
field1   1       1       0       1
field2   0       0       1       1
fieldn   1       1       1       1

My search looks like:

index IN (index1 index2 indexn)
| stats count(*) as * by index
| transpose column_name=Field header_field=index
| outputlookup whateverfile.csv

The problem with this search is that it takes ages, and I don't need a full count. I just need it to count the first value it finds, stop, and move on; that way I get a count of 0 if the field doesn't exist and 1 if it does. Any ideas?
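The shape of the desired output can be sketched in Python with hypothetical data; this is essentially what the stats/transpose pipeline produces, but with presence flags instead of full counts:

```python
def field_matrix(index_fields):
    """index_fields: {index_name: set_of_field_names}.
    Returns (header, rows) with a 1/0 presence flag per index,
    mirroring the transpose output without counting every event."""
    indexes = sorted(index_fields)
    all_fields = sorted(set().union(*index_fields.values()))
    header = ["Fields"] + indexes
    rows = [[f] + [1 if f in index_fields[ix] else 0 for ix in indexes]
            for f in all_fields]
    return header, rows
```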
Hi Team, I am trying to take a backup of our lookups from the search head console, and I have tried two ways. a) Using the REST command | rest /servicesNS/-/-/properties/lookups. Issue: since we have only limited permissions, the links to the lookups don't work. b) | inputlookup abcd.csv | append [inputlookup wxyz.csv]. Issue: I can see the output of both .csv files, but I'm unable to identify where the content of abcd.csv ends and wxyz.csv begins. Can anyone please suggest the best way to do this from the Splunk GUI, since we only have power user access?
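In SPL the usual trick for option (b) is to tag each lookup before appending, e.g. | inputlookup abcd.csv | eval source="abcd.csv" | append [| inputlookup wxyz.csv | eval source="wxyz.csv"]. The same idea sketched in Python with made-up data:

```python
import csv
import io

def merge_lookups(named_csvs):
    """named_csvs: {filename: csv_text}. Tags every row with the file it
    came from, like adding `| eval source="abcd.csv"` before an append."""
    rows = []
    for name, text in named_csvs.items():
        for row in csv.DictReader(io.StringIO(text)):
            row["source"] = name
            rows.append(row)
    return rows
```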
I am trying to send data to a Splunk Cloud free trial account, following the documentation here: https://docs.splunk.com/Documentation/Splunk/8.2.6/Data/UsetheHTTPEventCollector. This is what I should use: "You must send data using a specific URI for HEC. The standard form for the HEC URI in Splunk Cloud Platform free trials is as follows: <protocol>://http-inputs.<host>.splunkcloud.com:<port>/<endpoint>". But that domain name does not exist (the subdomain with the http-inputs. part). Is the documentation wrong? How do I get this working?
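For what it's worth, the request the doc describes would be built roughly like this (a sketch only; the stack name, token, and port 443 are placeholders, and if the http-inputs. name genuinely doesn't resolve for a trial stack, that's a DNS/provisioning question rather than a code one):

```python
import json
import urllib.request

def hec_event_request(stack, token, event):
    """Build an HEC POST following the documented URI pattern
    <protocol>://http-inputs.<host>.splunkcloud.com:<port>/<endpoint>.
    Port 443 is an assumption; check your stack's HEC settings."""
    url = f"https://http-inputs.{stack}.splunkcloud.com:443/services/collector/event"
    req = urllib.request.Request(
        url, data=json.dumps({"event": event}).encode(), method="POST")
    req.add_header("Authorization", f"Splunk {token}")
    return req
```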
I want to change the bin value ranges in a calendar heat map. How can I do that? I don't want the default bin values in the heat map.
The ITE Work app was installed from the back end, and when we tried opening the page it showed "Internal Server Error"; the app is not loading.
Hi, I can successfully log in to my account overview; however, when I click "Launch AppDynamics" from there, I get taken to another login screen and can't get any further. Looking through the forum I saw a post that said to go to https://help.appdynamics.com/support, but if I try that I get an error as well. I tried resetting my password with "forgot password", but it didn't work; I got no email. Looking at the debug console in Chrome, I can see the failing request and its response. Any help would be appreciated. Regards, Doug ^ Post edited by @Ryan.Paredez to remove images that show the Controller name and URL. For security and privacy reasons, please do not share your Controller URL on the community forum.
I downloaded Splunk-Windows-64.zip. There is no install file, no setup file, and nothing else I can find to install the program with. Did I miss something, or did you guys intentionally leave that out?
Sorry for the bad translation. I have a Cloud client whose license is 50 GB per day. Additional DDAA has been contracted, but what it covers is not very clear to me; the shared documentation seems to be outdated or unavailable. https://docs.splunk.com/Documentation/SplunkCloud/8.0.2007/User/DataArchiver https://docs.splunk.com/Documentation/SplunkCloud/8.0.2007/Service/SplunkCloudservice#Storage https://docs.splunk.com/Documentation/SplunkCloud/8.0.2007/Service/SplunkCloudservice#Search When I go to "Settings" > "Indexes" I can see the indexes used by this client, plus the ones internal to Splunk from what I see. One of the indexes has already reached its maximum size of 500 GB, and I don't know whether DDAA is active for it. From the screenshot, do I understand correctly that DDAA is active? Must I do something? I am worried that information is being lost, since the client needs to retain that data for a long time.
Hi All, I need help getting started with setting up Splunk Connect for Syslog (SC4S). I am not sure how to start or what procedure and documentation to follow. I am using Splunk Cloud 8.2.1.
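At a high level, SC4S runs as a container that receives syslog traffic and forwards it to Splunk over HEC, so the minimal setup is a host to run the container plus an env_file pointing it at your HEC endpoint. A sketch of that file (verify the variable names against the SC4S documentation for the release you deploy; the endpoint and token are placeholders):

```
# /opt/sc4s/env_file
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://<your-hec-endpoint>:443
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=<your-hec-token>
SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=yes
```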
Hi all, I need help getting the trailing number from a field in a search. Examples of the field:

id = bdf73ad5-4499-4f70-b7e3-e2c81ae868c3-default-asset-423447
id = bdf73ad5-4499-4f70-b7e3-e2c81ae868c3-default-asset-6672
id = bdf73ad5-4499-4f70-b7e3-e2c81ae868c3-default-asset-4232323

I was using:

| eval stripped_asset_id=substr(id, -6)

However, that is only consistent when the trailing number has exactly 6 digits, and it often has more or fewer. How can I take everything after the last dash "-"?
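In SPL this is typically a rex, e.g. | rex field=id "-(?<stripped_asset_id>\d+)$". The same pattern sketched in Python:

```python
import re

def trailing_number(asset_id):
    # Anchor at the end of the string and capture the digits after
    # the last dash, regardless of how many digits there are.
    m = re.search(r"-(\d+)$", asset_id)
    return m.group(1) if m else None
```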
Is it possible to ingest data specifically from Microsoft Defender Safe Links? We have tried both the Microsoft 365 Defender Add-on for Splunk and the Splunk Add-on for Microsoft Security without success. It appears that both of these collect data from incidents and alerts. Any help appreciated.
Is there a way to create a report, using metadata or any other data, that lists all the fields available by index and sourcetype? For example, I just need the index, the sourcetype, and all available fields under them listed out as a report.
I am trying to build a Splunk add-on that polls a third-party API. I have 1800 input entries that are set to poll every 24 hours. The problem I'm seeing is that I get an HTTP 429 error from the API destination. Is there a way to tell Splunk to only run a single API call at a time so as not to overload the destination server?
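I don't know of a built-in switch for this, but the effect people usually want (calls serialized and spaced out, so the destination never sees a burst) can be sketched as follows; in an add-on you would approximate it by staggering each input's interval/start time rather than running code like this directly:

```python
import time

def run_serially(tasks, min_interval=0.5):
    """Execute callables one at a time, at least min_interval seconds apart,
    so a rate-limited API never sees concurrent or bursty requests."""
    results = []
    last = None
    for task in tasks:
        if last is not None:
            wait = min_interval - (time.monotonic() - last)
            if wait > 0:
                time.sleep(wait)
        last = time.monotonic()
        results.append(task())
    return results
```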
After upgrading the Splunk Add-on for Microsoft Office 365 to version 3.0.0, we are required to disable ServiceHealth.Read.All in the Office 365 Management APIs and enable ServiceHealth.Read.All in Microsoft Graph, as per the app documentation. After following the instructions and assigning the delegated type to ServiceHealth.Read.All under Microsoft Graph, I'm getting the error below in the logs: level=ERROR pid=23448 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api.GraphApiConsumer pos=GraphApiConsumer.py:run:74 | datainput=b'ServiceUpdateMessages' start_time=1651772811 | message="Error retrieving Graph API Messages." exception='NoneType' object is not iterable The inputs under the Office 365 Management APIs are working fine, which indicates that configuration data like the client ID and secret are correct. Can someone please let me know what might be causing this issue?
I have a field extraction I've created that replaces a couple of previous extractions I deleted. However, I have a couple of reports that still reference the deleted extractions when I view the available fields in the events. I've tried re-creating the report and still get the same behavior. I'll also mention that if I change the evtid in the query below to another possible value, I get the available fields I expect to see. Any ideas what might be going on? The extracted field is vmax_message; vmax_host is also an extracted field and works just fine.

index=vmax_syslog sourcetype=vmax:syslog fmt=evt vmax_host=*san* evtid=5200 sev="warning"
| eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S")
| chart values(symid) AS symid values(vmax_message) AS message values(sev) AS severity values(Time) as Time by vmax_host
Hello, is there any way to check from the Splunk GUI whether a UF has been installed on a server/host? Any help will be highly appreciated. Thank you!