All Topics


I have logs like below:

Email information for the template:payment_receipt_ppo_1 Posted Successfully with status:200
Email information for the template:DEV_1 Posted Successfully with status:200
Email information for the template:payment_1 Posted Successfully with status:400

I want to display them like below:

template
----------
payment_receipt_ppo_1
DEV_1
payment_1
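One sketch of an extraction for this, assuming the template name never contains spaces and `<your base search>` stands in for whatever selects these events:

```
<your base search>
| rex field=_raw "template:(?<template>\S+)\s+Posted"
| stats count by template
```

`stats count by template` de-duplicates the names; use `| table template` instead if every occurrence should be listed.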
I need guidance on the situation below: server1, with Splunk installed, is to be migrated to server2. The plan is to detach the disk from server1 and attach it to server2. The server names will be different. Do I need to run an install of the same version on top of it on server2, given that server.conf and other files will have the old server name (server1) in their conf files?
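For what it's worth, a rename-in-place is often possible instead of a reinstall; a sketch of the settings that usually carry the old name (exact files can vary by version and role, so treat this as a starting point rather than a definitive list):

```
# $SPLUNK_HOME/etc/system/local/server.conf
[general]
serverName = server2

# $SPLUNK_HOME/etc/system/local/inputs.conf
[default]
host = server2
```

If $SPLUNK_HOME moved intact with the disk, installing the same version on top is typically not required.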
Documentation states that the latest supported version is iOS 13: https://docs.splunk.com/Documentation/MintIOSSDK/5.2.x/DevGuide/Requirementsandinstallation

Other crash-analytics products tend to state things like "supports iOS 10+", so I am wondering whether:
- just a documentation update is required
- it probably works fine, but Splunk needs to complete testing before they will support it
- the SDK needs an update before it will support iOS 14
Hi, We have successfully integrated the Symantec WSS integration in our platform and started to receive events. But about three weeks ago, when the Symantec WSS product was fully deployed in the company and the data volume started to increase, the ingestion process stopped. Debugging the process, we have identified that the .zip files the Heavy Forwarder is downloading are not well formed, and when Splunk monitors those files it is not able to finish. Has anyone suffered this kind of error? Any advice? Thanks for the help.
Hi everybody, We have a stream forwarder which sends every mail that enters into an index. It contains everything from the mail. I want to know if it is possible to see whether the receiver of the mail opened the link that is in the mail or in the button? Thank you, Sasquatchatmars
Hi, I am moving from one machine to a 5-machine cluster:
1 SH
1 Master Node
3 Indexers

What are the steps I need to migrate all my users? And to which machine do I need to copy the files? Thanks in advance, Rob
Does our old indexed data get erased upon upgrading the Splunk Enterprise version, or does it remain safe?
I am trying to build a search bar and a time range picker to display search results; the results should be displayed according to the query given in the search bar. As per the post https://community.splunk.com/t5/Dashboards-Visualizations/How-do-you-add-a-search-bar-without-Advanced-XML/m-p/252873/highlight/true#M15754, I have taken the sample JS code to create a search bar in my Splunk environment. I have added this sample code as search_bar.js under the directory $SPLUNK_HOME/apps/MyApp/appserver/static. Here is the sample code:

var SearchBarView = require("splunkjs/mvc/searchbarview");
new SearchBarView({
    id: "example-search-bar",
    managerid: "example-bar-search",
    el: $("#divToHangOn")
}).render();

var TableView = require("splunkjs/mvc/tableview");
new TableView({
    id: "example-table",
    managerid: "example-bar-search",
    pageSize: "5",
    el: $("#divToHangOn2") // separate div
}).render();

var SearchManager = require("splunkjs/mvc/searchmanager");
new SearchManager({
    id: "example-bar-search",
    search: "index=_internal | head 100 | timechart count by sourcetype span=100s"
});

// Hooking up events
var manager = splunkjs.mvc.Components.getInstance("example-bar-search");
var searchbar = splunkjs.mvc.Components.getInstance("example-search-bar");
var timerange = searchbar.timerange;

searchbar.on("change", function() {
    manager.set("search", searchbar.val());
});

timerange.on("change", function() {
    manager.search.set(timerange.val());
});

I have created a view/dashboard as search_sample.xml, which is placed under the directory $SPLUNK_HOME/apps/MyApp/default/data/ui/views.
Below is the code used in my dashboard:

<dashboard script="search_bar.js">
  <label>Search app using JS</label>
  <row>
    <panel>
      <html>
        <table style="width:100%">
          <tr>
            <td style="width: 100%;">
              <h3>Search</h3>
              <div id="divToHangOn"></div>
            </td>
          </tr>
          <tr>
            <td style="width: 100%;">
              <h3>Results</h3>
              <div id="divToHangOn2"></div>
            </td>
          </tr>
        </table>
      </html>
    </panel>
  </row>
</dashboard>

I am not getting the expected results. Even after refreshing/restarting the Splunk app, I only get the label and the heading used in the dashboard code; I am unable to get the search bar with time range to enter the required search query. I am not sure where the issue lies. Could anyone please help me with this?
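A common cause of this symptom (the headings render but the SplunkJS components never appear) is that the script runs before the dashboard framework is ready: in SimpleXML dashboards, SplunkJS modules are normally loaded inside a require() callback that waits on the ready token. A sketch of that wrapper, reusing the same component ids as above (module paths follow the standard SplunkJS stack; verify them against your Splunk version):

```
// Sketch only: wraps the original code in a require() callback so it runs
// after the SimpleXML dashboard framework has loaded.
require([
    "jquery",
    "splunkjs/mvc",
    "splunkjs/mvc/searchbarview",
    "splunkjs/mvc/tableview",
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/simplexml/ready!"   // resolves once the dashboard is ready
], function($, mvc, SearchBarView, TableView, SearchManager) {
    var manager = new SearchManager({
        id: "example-bar-search",
        search: "index=_internal | head 100 | timechart count by sourcetype span=100s"
    });
    var searchbar = new SearchBarView({
        id: "example-search-bar",
        managerid: "example-bar-search",
        el: $("#divToHangOn")
    }).render();
    new TableView({
        id: "example-table",
        managerid: "example-bar-search",
        pageSize: "5",
        el: $("#divToHangOn2")
    }).render();
    searchbar.on("change", function() {
        manager.set("search", searchbar.val());
    });
});
```

Also check the browser console for 404s on the script itself; apps normally live under $SPLUNK_HOME/etc/apps, so the static path is worth double-checking.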
Hi Everyone! I don't understand: if I use SAML authentication, can I not access the API? I keep getting 404 for any request to the API.

curl -u user:passwd -k https://splunk-api.xxxxxxxx/services/search/jobs -d search="hello"
{"text":"The requested URL was not found on this server.","code":404}

Thanks.
Hi Splunkers, I have started using the Splunk Logging Driver to get my Docker logs into Splunk. I am using Splunk Enterprise 8.0.1. The problem is that the indexer does not parse the Docker logs. I have tried both json and raw formats, but neither seems to be noticed by the indexer. Current setup: the HEC token uses source type _raw and all indexes are allowed.

Docker startup:

docker run \
  --log-driver=splunk \
  --log-opt splunk-token=xxxx \
  --log-opt splunk-url=http://xxxxx:8088 \
  --log-opt splunk-format=raw \
  --log-opt tag="{{.Name}}/{{.FullID}}" \
  --log-opt labels=location \
  --log-opt env=TEST \
  --env "TEST=false" \
  --label location=xxxxx \
  containerId

props.conf:

[source=http:docker]
INDEXED_EXTRACTIONS = JSON
KV_MODE = none
AUTO_KV_JSON = false
TRANSFORMS-class_to_xx_index = route_to_xx_index

transforms.conf:

[route_to_xx_index]
REGEX = .*\"xx\":\"xx\".*
DEST_KEY = _MetaData:Index
FORMAT = xx_index

All logs are going to the default index. I have double-checked that the regex pattern matches, and the same pattern works for a universal forwarder, whose logs are parsed and indexed correctly. The input that reaches the default index is one line:

containerName/container location=xx TEST=false {"message":"User xxx does xxx","priority":6,"priorityName":"INFO","sessionId":"xxx","action":"auth/login","application":"xx","environment":"development","security_level":"xx","info":"xxx"}

which does not get parsed or indexed. If I try with the _json token, the input to Splunk is in "line" format with the same content, and the logs are also not parsed. Any idea what I am doing wrong here? How can I get the JSON-formatted logs parsed?
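Two things worth checking, offered as a sketch rather than a confirmed diagnosis. First, props.conf source stanzas normally use the double-colon form [source::http:docker], so a [source=http:docker] stanza may never match. Second, with splunk-format=raw plus a tag, each event is prefixed with "containerName/container location=xx TEST=false " before the JSON, so the event is no longer valid JSON and INDEXED_EXTRACTIONS=JSON has nothing it can parse. Dropping the tag and label options, or switching to the json format, delivers a clean JSON event:

```
# Sketch: token, URL, and container name are placeholders from the post
docker run \
  --log-driver=splunk \
  --log-opt splunk-token=xxxx \
  --log-opt splunk-url=http://xxxxx:8088 \
  --log-opt splunk-format=json \
  containerId
```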
We all know that foreach * will work on all the fields in the result row, right? Well, it seems that when using data models, the wildcard for fields is not what it seems.

| from datamodel:DataModelName
| foreach * [ eval field_list=mvappend(field_list, "<<MATCHSTR>>") ]
| table field_list, * _raw

With the above running on a suitably populated data model, the wildcard in the foreach and the one in the table statement give a different set of fields, but not always. The foreach * will normally show the data seen in the backing event mapped onto the data model, whereas the table * will show the data model field names, even if those fields are null. However, I have run searches today where the foreach statement is actually getting fields from the data model that do not exist in the data. It's Splunk 7.3.4.

I have run these two searches, which give different results that are even stranger:

| from datamodel:Performance.CPU | head 1 | foreach * [ eval field_list=mvappend(field_list, "<<MATCHSTR>>") ] | table field_list, * _raw

| from datamodel:Performance.CPU | foreach * [ eval field_list=mvappend(field_list, "<<MATCHSTR>>") ] | table field_list, * _raw | head 1

The first one gives me field_list containing all the fields from the Performance.CPU data model, including the inherited ones such as hypervisor_id, which are null and definitely not in the data, whereas the second one gives me just a smaller list of the non-null fields in the data model. However, the _raw data is the same in both cases. Does anyone know what * should mean when dealing with data models, and should it work the same in both the foreach and table cases?
Hi, I have data where there are two password fields which I need to mask via props.conf and also in the search. The data looks like this:

"this is the test message to demonstrate two fields of password abc.password=QWERTYUI and in the same line we also have another password like xyyz.password=Q%1^WRTy."

I was trying my luck in the search first:

rex field=_raw mode=sed "s/abc\.password=\w+/abc.password=XXXXXXXX/g"

I cannot do the masking in a single rex sed command for both passwords; I was able to do it successfully for the first one, as it has no special characters. Regards, Moin
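A single sed-mode expression can cover both keys by capturing the key name and matching the value with \S+ (non-space) instead of \w+, since \w stops at special characters such as % and ^. A sketch, assuming passwords never contain spaces (note \S+ will also swallow trailing punctuation such as the final period in the sample, so tighten the class if that matters):

```
| rex field=_raw mode=sed "s/(\w+\.password=)\S+/\1XXXXXXXX/g"
```

The same expression should work at index time in props.conf as SEDCMD-mask_passwords = s/(\w+\.password=)\S+/\1XXXXXXXX/g under the relevant sourcetype stanza.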
Hi, I want to filter on the rex value below in a where clause, but it's not working.

| rex field=_raw ":[ ]*(?<error>[^:]+$)"
| where error="No route to host (Host unreachable)"

Kindly suggest. Thanks in advance!
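When an anchored capture like [^:]+$ is compared with =, an invisible trailing character (whitespace, a period, a carriage return) is enough to make the equality fail. A sketch of a more forgiving variant that trims trailing whitespace out of the capture:

```
| rex field=_raw ":[ ]*(?<error>[^:]+?)\s*$"
| where error="No route to host (Host unreachable)"
```

Alternatively, match on a prefix instead of full equality:

```
| where like(error, "No route to host%")
```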
Hi All, For one of my apps (Google Analytics App for Splunk), I am only getting 1000 events each day. Ideally this app sends all the Google Analytics data into Splunk, which includes server downtime and webpage upload time, so it should never be limited by quantity. Can anybody please suggest what the issue could be here? Is there any limit to the number of events set up in any of the .conf files in Splunk? PS: the issue can't be at the Google Analytics end; this has been verified. Thanks, AG.
Hello, I am looking for help on a search query using a lookup table:

col1,col2,col3
aaa,100,a@a.com
bbb,200,b@b.com

I need to use this lookup table, with col1 and col2 as search criteria for a base query. If it yields any result, I need to use the email from col3 and send an email notification. This needs to be done for every row of the lookup table, and the whole process needs to be automated. I have tried to find whether such requirements have been answered in this community, but with no luck so far. Can you please guide me here? Any pointers are appreciated. Thanks.
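One common pattern, sketched under the assumption that col1 and col2 map to fields in the base events (the names mylookup.csv, user, and code below are placeholders): drive the base search from the lookup, then join the matches back to the lookup to recover col3:

```
index=main [| inputlookup mylookup.csv | rename col1 as user, col2 as code | fields user code ]
| stats count by user, code
| lookup mylookup.csv col1 as user, col2 as code OUTPUT col3 as recipient
```

Saved as a scheduled alert, each result row then carries its own recipient, which can feed an email action (for example | sendemail sendresults=true, or a per-result alert action) so every matching lookup row triggers its own notification automatically.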
Hello, Normal users should see only a subset of the field extractions, while a small set of higher-privileged users should be able to see more fields extracted from a log event in the Search app. The reason: deeper analysis capabilities for special analysts, and limited field analysis plus search-time savings for normal users.

Can you please tell me how this has to be implemented? Is there an easier approach than mine? What do I have to configure, and where? Can I handle it in one add-on? Do I really save search time if field extraction is limited for the majority of users? How can I measure the difference?

My approach and actual (non-)results: I created add-ons with report field extractions for specific sourcetypes (log events):
- create an add-on ..._baseline with the field subset; all users are granted access
- create an add-on ..._all with all fields extracted, but limit access to a role "deep_data"
- assign the role to the users who should see all the data

But there is no difference whether a user has the role or not. By playing with some permission assignments I can enforce that users see either the subset or the whole set, but it does not depend on the role assignment; it is just the same for all users.

Thx and Regards, KaS
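For the role-restricted add-on to have any effect, its knowledge objects must be exported with read access limited to the role; a sketch of metadata/default.meta in the ..._all add-on (the role name deep_data is taken from the post; everything else is an assumption and untested against your setup):

```
[props]
access = read : [ deep_data ], write : [ admin ]
export = system
```

Restricting read access hides the extractions from other roles; whether it measurably reduces search time is a separate question, best checked by comparing the Job Inspector timings of the same search run as a deep_data user and as a normal user.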
Hi team, 1. I have the below query:

<base query here>
| rex field=_raw "POST\s+(?<RequestURL>.*)HTTP.*company\=(?<CMID>.*?)\&"
| eval autosave=if(RequestURL like "%autosave=true%", "1", "0")
| bin span=1d _time
| stats count(eval(autosave=1)) as autosave count(eval(autosave=0 OR autosave=1)) as total by _time,SFDC
| eval percent=round(autosave * 100 / total,2)
| chart values(total) as total values(autosave) as autosave values(percent) as percent by _time, SFDC
| ......

2. But the table returned to me is not formatted as specified by "bin span=1d _time" in the query above; the table has two rows for each day, which is not what I want (please refer to the screenshot below). 3. Here is the expected result I want. How do I modify the query to achieve the expected result?
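One sketch of a simplification, assuming one row per day per SFDC value is acceptable: let stats do all the aggregation and drop the second chart pass, since pivoting an already-aggregated result is often where the duplicate rows per day creep in. Note that a plain count equals the original count(eval(autosave=0 OR autosave=1)), because autosave is always set by the eval:

```
<base query here>
| rex field=_raw "POST\s+(?<RequestURL>.*)HTTP.*company\=(?<CMID>.*?)\&"
| eval autosave=if(like(RequestURL, "%autosave=true%"), "1", "0")
| bin span=1d _time
| stats count(eval(autosave="1")) as autosave, count as total by _time, SFDC
| eval percent=round(autosave * 100 / total, 2)
```

If the pivoted layout is required, check that nothing downstream re-buckets _time: two rows per day usually means the _time values within a day are not identical by the time the final chart runs.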
Hi, I would like to use the Splunk REST API to get a list of installed UF agents and their status, host, IP, etc. Is there a way to do this with the API?
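If the forwarders phone home to a deployment server, its client list exposes hostname, IP, and last phone-home time over REST; a sketch (hostname, port, and credentials are placeholders):

```
curl -k -u admin:changeme \
  "https://deployment-server:8089/services/deployment/server/clients?output_mode=json"
```

Without a deployment server, a common fallback is searching the indexers' own connection metrics, e.g. index=_internal source=*metrics.log* group=tcpin_connections, which records each connecting forwarder's hostname, IP, and version.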
I have a query that looks for certain error messages and displays a list sorted by most common occurrence. My problem is that some of these error messages contain unique IDs, causing them to show as separate results. The first 3 messages below end with unique values. How do I update this query to output those error messages without the unique values after the * mark, or even replace them with another string like "redacted"?

index=mint event_name=APIServiceError appEnvironment=Release appVersionName>=3.0
( extraData.message="Incorrect email or password for*"
  OR extraData.message="Street address and/or city is too long*"
  OR extraData.message="The following address could not be found:*"
  OR extraData.message="We failed to authorize your payment card. Please verify your payment card is valid."
  OR extraData.message="The network connection was lost." )
| stats count by extraData.message
| sort -count
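One sketch using sed-mode rex to collapse the variable tails before aggregating (the three prefixes are taken verbatim from the query; "redacted" is an arbitrary replacement label, and the slash in "and/or" is escaped because sed-mode uses / as its delimiter):

```
index=mint event_name=APIServiceError appEnvironment=Release appVersionName>=3.0 ( ... same message filters ... )
| eval message='extraData.message'
| rex field=message mode=sed "s/^(Incorrect email or password for|Street address and\/or city is too long|The following address could not be found:).*/\1 redacted/"
| stats count by message
| sort -count
```

Copying extraData.message into a plain field first sidesteps quoting issues with the dotted field name inside rex and stats.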