All Topics

Hello Splunkers! I hope you are all staying safe and healthy during this pandemic. I am using Splunk 8.0 because it offers flexibility with Python versions (Python 2 and Python 3). I have two different Splunk applications that provide scripted inputs. One of the apps runs on Python 2.7.17 and the other runs on Python 3.7.17. How can I collect the inputs from both of these apps?
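For reference, Splunk 8.0 lets each scripted input choose its interpreter with the python.version setting in inputs.conf, so the two apps can coexist. A hedged sketch — the script names and interval are placeholders, not taken from the question:

```
# inputs.conf in the Python 2 app
[script://./bin/legacy_input.py]
python.version = python2
interval = 60

# inputs.conf in the Python 3 app
[script://./bin/new_input.py]
python.version = python3
interval = 60
```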
Hi Team, We are working on integrating Azure AD with Splunk Enterprise to control user access from Azure AD. I have referred to the document below, but I have a question about it: the link states that the user needs to be created both in Azure AD and on the Splunk Enterprise side. That confuses me about the purpose of this integration and how it is helpful. Can someone elaborate on this? The point of the integration is to manage your accounts in one central location, the Azure portal, but if you also need to create the user in Splunk, how does that help keep everything in one place? https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/splunkenterpriseandsplunkcloud-tutorial#:~:text=the%20configuration%20works.-,Configure%20Azure%20AD%20SSO,on%20method%20page%2C%20select%20SAML. Regards, Shweta
I have two queries and I want to display both query results in a line chart (one line from the result of query 1 and another line from the result of query 2). Below is the query where I append the two queries, but I am not getting a proper line chart:

index="cx_aws" host="aw-lx0244.deltadev.ent" source ="pf-enrollee-family-roster-service" AND ("/persons/" OR "/contracts/") AND HttpStatusCode
| bucket _time span=1h
| stats count by _time
| append [search index="cx_aws" host="aw-lx0244.deltadev.ent" source="pf-enrollee-family-roster-service" AND ("/persons/" OR "/contracts/") AND HttpStatusCode
    | eval TimeTaken3 = trim(replace(TimeTaken, ",",""))
    | eval REQUESTED_URL2 = trim(replace(REQUESTED_URL, "/contracts/",""))
    | eval REQUESTED_URL3 = trim(replace(REQUESTED_URL2, "/enrollees",""))
    | sort -num(TimeTaken3)
    | WHERE TimeTaken3>10000
    | bucket _time span=1h
    | stats count by _time]

Please suggest how to achieve the two lines in the line chart (one for each query).
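For what it's worth, a common pattern for getting two separate lines is to tag each sub-result with a series name and then pivot on it; a hedged SPL sketch with placeholder search terms (a generic illustration, not the exact fix for the query above):

```
index=foo first_query_terms
| bucket _time span=1h | stats count by _time | eval series="query1"
| append
    [ search index=foo second_query_terms
      | bucket _time span=1h | stats count by _time | eval series="query2" ]
| xyseries _time series count
```

The xyseries turns each series value into its own column, which the line chart renders as one line per query.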
While opening Splunk, it is not opening. Can you tell me how to open it?
Hello, for your information: we can no longer query the index with LDAP data (we are getting indexer memory errors) due to the OpenLDAP Add-on for Splunk (CIM) at https://splunkbase.splunk.com/app/3520/ Lookups: openldap_user_lookup & openldap_src_lookup
Hello everybody, in my dashboard I have a time selection at the top:

<input type="time" token="time">
  <label>TimePicker</label>
  <default>
    <earliest>-14d@h</earliest>
    <latest>now</latest>
  </default>
</input>

Now the event list "Failed Event AccessLog" shows the errors with different timestamps:

<row>
  <panel>
    <event>
      <title>Failed Event AccessLog</title>
      <search>
        <query>`log_index` $environment$ $RepoName$ AND "//contentserver/fff/" AND 500 | timechart count by contRep</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="list.drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </event>
  </panel>

Now I want the event list "Imagemaster Log" to take the timestamps from the AccessLog event list and compare them against the Imagemaster log within a window of at most 2 seconds before and after each timestamp. It is like a time range around each event output from the AccessLog.

  <panel>
    <event>
      <title>Failed Events Imagemaster Log</title>
      <search>
        <query>index="xxx_log" OR index="xxx-log" source="/xxx/yyy/zzz/ImageMaster.log" "Error" "*ContentServer*"</query>
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
      </search>
      <option name="list.drilldown">none</option>
      <option name="refresh.display">progressbar</option>
      <option name="rowNumbers">0</option>
      <option name="table.drilldown">none</option>
      <option name="type">table</option>
    </event>
  </panel>
</row>

Unfortunately, I can't come up with a solution. I am thankful for any help.
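In case it helps, one way to express a ±2 second window around each AccessLog event is SPL's map command, which runs a secondary search once per result row; a rough sketch reusing the searches from the snippets above (untested, and maxsearches is an assumed value you may need to raise):

```
`log_index` $environment$ $RepoName$ AND "//contentserver/fff/" AND 500
| eval window_earliest = _time - 2, window_latest = _time + 2
| map maxsearches=100 search="search index=\"xxx_log\" OR index=\"xxx-log\"
    source=\"/xxx/yyy/zzz/ImageMaster.log\" \"Error\" \"*ContentServer*\"
    earliest=$window_earliest$ latest=$window_latest$"
```

Note that map can be expensive when the outer search returns many rows.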
Hi, the search heads are not showing under the deployer even after performing the steps below.

On the deployer --> server.conf:

[shclustering]
pass4SymmKey = passkey
shcluster_label = shcluster1

On the search heads (3 search heads):

./splunk init shcluster-config -auth admin:password -mgmt_uri https://SH1-IPaddress:8089 -replication_port 34567 -replication_factor 3 -conf_deploy_fetch_url http://deployerIPaddress:8089 -secret passkey -shcluster_label shcluster1
./splunk restart
./splunk init shcluster-config -auth admin:password -mgmt_uri https://SH2-IPaddress:8089 -replication_port 34567 -replication_factor 3 -conf_deploy_fetch_url http://deployerIPaddress:8089 -secret passkey -shcluster_label shcluster1
./splunk restart
./splunk init shcluster-config -auth admin:password -mgmt_uri https://SH3-IPaddress:8089 -replication_port 34567 -replication_factor 3 -conf_deploy_fetch_url http://deployerIPaddress:8089 -secret passkey -shcluster_label shcluster1
./splunk restart
./splunk bootstrap shcluster-captain -servers_list "https://SH1-IPaddress:8089,https://SH2-IPaddress:8089,https://SH3-IPaddress:8089" -auth admin:password
./splunk show shcluster-status -auth admin:password
./splunk show kvstore-status -auth admin:password
Hello, I've created a search which contains (...(CallerCountry="CN")). When I look at the search log in the job inspector (to get information about my search), I wonder why Splunk changes the original (...(CallerCountry="CN")) to (...(__f!=v OR CallerCountry="CN")). The result of this search is "better" than the result of my original search. Does anybody know what __f!=v means? I couldn't find anything about it in the Splunk documentation. PS: I use Splunk Enterprise 8.0.8. Thanks for your support, Manuel
Hi, I have two queries, and I need to display the results from both queries in one line graph report.
Hello all, I am trying to run the query below, and when I change the earliest time to the last 7 days I get the error below. However, it runs fine if I use -30d for the earliest time.

`acn_patchmanagement_macro_serverdetails_t1_001`
|where NOT IN (Server,[|search earliest=-7d latest=now() `acn_patchmanagement_macro_serverdetails_t1_001` |stats count(Server) by Server|table Server])
|lookup acn_patchmanagement_lookup_server-details_001 Server OUTPUT OS_Type OS_SubType
| eval OS_Type=if(isnull(OS_Type), "NA", OS_Type)
| eval OS_SubType=if(isnull(OS_SubType), "NA", OS_SubType)
|append [| inputlookup acn_patchmanagement_lookup_server-details_001.csv ]
|fields Last_Patched_Date ChangeNo Server OS_Type OS_SubType Overall_Status

Below is the error:

Error in 'where' command: The expression is malformed. An unexpected character is reached at ') ) '.

Please let me know the solution for this.
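As an aside, SPL's where command has no `NOT IN (field, [subsearch])` form, which is typically what triggers this "expression is malformed" error; exclusion by subsearch is more commonly written in the base search with NOT. A hedged sketch using the macro from the question (untested):

```
`acn_patchmanagement_macro_serverdetails_t1_001` NOT
    [ search earliest=-7d latest=now() `acn_patchmanagement_macro_serverdetails_t1_001`
      | stats count by Server
      | fields Server ]
```

The subsearch is reduced to a bare Server field list, which Splunk expands into a `NOT (Server="..." OR ...)` filter.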
Hi there. Is there an SPL query length limit on a dashboard panel? And if so, is it documented anywhere? I'm not talking about a limit on returned rows, as many people have discussed before; I'm talking about the length limit for a search associated with a panel. The case looks like this: the customer has a dashboard with some panels. The panels seem to work, but: 1) If you click on "Open in Search" you get a 400 Bad Request. 2) If you try to edit the dashboard and get into the panel details, the query is truncated. It looks as if some limit was imposed at some point (after some upgrade?) which no longer lets us edit the queries. And yes, the query is horribly ugly and calls for optimization, but that's another story and we'll get to it.
I was trying to connect MongoDB collection data to my Splunk index using Splunk DB Connect 3.5.1. The whole process went fine; there were no issues with the connection, the JRE path, and so on. But at the end, while finishing the setup in the data lab, I got this error: "There was an error processing your request. It has been logged (ID XXXXXXXXXXXXXXX)." Can anyone please help me out? Thank you.
I am getting an error **"Search auto-canceled"** after upgrading to OEL7. Here is the search log:

05-18-2021 14:15:40.913 INFO LocalCollector - Final required fields list = EventData_Xml,Failure_Information,HOST_NAME,INSTANCE_NAME,Message,RenderingInfo_Xml,ServerName,System_Props_Xml,_bkt,_cd,_raw,_si,_subsecond,database_instance,host,index,linecount,source,sourcetype,splunk_server
05-18-2021 14:15:40.913 INFO UserManager - Unwound user context: elon.musk -> NULL
05-18-2021 14:15:40.913 INFO UserManager - Setting user context: elon.musk
05-18-2021 14:15:40.913 INFO UserManager - Done setting user context: NULL -> elon.musk
05-18-2021 14:15:40.913 INFO UserManager - Unwound user context: elon.musk -> NULL
05-18-2021 14:15:40.918 INFO TimelineCreator - Commit timeline at cursor=2147483647.000000
05-18-2021 14:15:41.210 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
05-18-2021 14:15:41.210 INFO DispatchExecutor - User applied action=CANCEL while status=0
05-18-2021 14:15:41.210 ERROR SearchStatusEnforcer - sid:1621311338.53671 Search auto-canceled
05-18-2021 14:15:41.210 INFO SearchStatusEnforcer - State changed to FAILED due to: Search auto-canceled
05-18-2021 14:15:41.246 INFO TimelineCreator - Commit timeline at cursor=1621311335.000000
05-18-2021 14:15:41.282 WARN DownloadRemoteDataTransaction - Got status code: 404 ( Not Found) from https://10.18.271.155:8180/services/search/jobs/remote_google.com_16213012338.53671/search_telemetry.json
05-18-2021 14:15:41.282 WARN DownloadRemoteDataTransaction - Failed to download search.log from remote peer 'google.com', uri='https://10.18.271.155:8180', sid='remote_google.com_16213012338.53671'
05-18-2021 14:15:41.283 INFO ReducePhaseExecutor - Downloading all remote search.log / search_telemetry.json files took 0.037 seconds
05-18-2021 14:15:41.283 INFO ReducePhaseExecutor - Ending phase_1
05-18-2021 14:15:41.283 INFO UserManager - Unwound user context: elon.musk -> NULL
05-18-2021 14:15:41.283 ERROR SearchOrchestrator - Phase_1 failed due to : DAG Execution Exception: Search has been cancelled
05-18-2021 14:15:41.285 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
05-18-2021 14:15:41.285 INFO DispatchExecutor - User applied action=CANCEL while status=3
05-18-2021 14:15:41.288 INFO DispatchStorageManager - Remote storage disabled for search artifacts.
05-18-2021 14:15:41.288 INFO DispatchManager - DispatchManager::dispatchHasFinished(id='16213012338.53671', username='elon.musk')
05-18-2021 14:15:41.289 INFO UserManager - Unwound user context: elon.musk -> NULL
05-18-2021 14:15:41.302 INFO UserManager - Unwound user context: elon.musk -> NULL
05-18-2021 14:15:42.301 INFO UserManager - Unwound user context: elon.musk -> NULL
05-18-2021 14:15:42.325 INFO UserManager - Unwound user context: elon.musk -> NULL
05-18-2021 14:15:42.338 ERROR dispatchRunner - RunDispatch::runDispatchThread threw error: DAG Execution Exception: Search has been cancelled
Hello, suppose I have a dashboard with some filters and drilldowns as shown in the figure below. The drilldown is attached to another dashboard: once I click on the drilldown, it directs me to that dashboard. If I select any dropdown filter, for example the Grouping category (ModuleName) filter as in Fig. 1, the same selection should be reflected in the Grouping category filter of the drilldown dashboard in Fig. 2. Fig-1: Main dashboard. Fig-2: Drilldown dashboard. Thank you in advance.
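One common pattern for carrying a filter selection into a drilldown dashboard is to pass the token in the drilldown link URL; a Simple XML sketch, where the app name, dashboard name, and token name are placeholders (the question does not show the actual XML):

```
<drilldown>
  <link target="_blank">
    /app/my_app/drilldown_dashboard?form.grouping_category=$form.grouping_category$
  </link>
</drilldown>
```

The `form.` prefix sets the corresponding input on the target dashboard, so its Grouping category dropdown opens pre-selected.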
I have the following inputs.conf in the UF for Splunk_TA_windows. My intention is to send a copy of the logs to two different indexes; I am aware of the license re-use, but I am OK with that. With the config below, some logs go to one index and other logs go to the other. When I compare the logs in the wineventlog and testsys indexes, they are not identical: the logs I see in wineventlog are different from those in testsys. It looks like some are pushed to one index while others are pushed to the other.

###### Windows OS Logs ##############
[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
index = testsys
renderXml = false

[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
index = wineventlog
renderXml = false

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
index = testsys
renderXml = false

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
index = wineventlog
renderXml = false

[WinEventLog://System]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
index = testsys
renderXml = false

[WinEventLog://System]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
index = wineventlog
renderXml = false

[WinEventLog://Microsoft-Windows-DriverFrameworks-UserMode/Operational]
disabled = 0
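Worth noting: two inputs.conf stanzas with the identical name (e.g. [WinEventLog://Application] twice) are merged by Splunk's config layering, so only one index setting survives per stanza, which would explain the split. One mechanism commonly used to index a copy of each event is CLONE_SOURCETYPE on the indexer or a heavy forwarder; a heavily hedged sketch, where the transform names and the sourcetype stanza headers are assumptions, not taken from the question:

```
# transforms.conf
[clone_winevents]
REGEX = .
CLONE_SOURCETYPE = wineventlog_copy

[route_copy_to_testsys]
REGEX = .
SOURCE_KEY = _raw
DEST_KEY = _MetaData:Index
FORMAT = testsys

# props.conf
[WinEventLog:Application]
TRANSFORMS-clone = clone_winevents

[wineventlog_copy]
TRANSFORMS-route = route_copy_to_testsys
```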
Hi, I have created a KV store on the search head deployer, but that KV store is not replicated to the search heads. The setting below is set to "true" on the search head deployer: conf_replication_include.lookups = true. What else needs to be changed?
I am trying to update the savedsearches.conf file under the app cisco-app-ACI via the deployer. When I push the changes, they are reflected under the folder ./etc/apps/deployment-apps/cisco-app-ACI/local/ but not under ./etc/apps/cisco-app-ACI/local/. How can I get the modifications updated under /etc/apps/cisco-app-ACI/local/? Please help.
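For reference, a search head cluster deployer normally pushes apps from $SPLUNK_HOME/etc/shcluster/apps (a deployment-apps directory belongs to a deployment server, which is a different component). A hedged sketch of the usual push; the target host and credentials are placeholders:

```
# on the deployer
cp -r cisco-app-ACI $SPLUNK_HOME/etc/shcluster/apps/
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```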
Hi, I cannot find any similar thread on this issue. My aim is to display only the fields whose values differ between two rows. My search produces this statistics view:

product  color  product_id  description1  description2  description3  description4
phone    blue   tag_1       pass          pass                        fail
phone    blue   tag_2       fail          pass          pass          fail

Desired outcome 1):

product  color  product_id  description1  description3
phone    blue   tag_1       pass
phone    blue   tag_2       fail          pass

or 2) if option 1 is not achievable, maybe this would work as well:

product_id  description1  description3
tag_1       pass
tag_2       fail          pass

Appreciate your help.
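In case it helps to see the row-diff logic outside of SPL, here is a minimal Python sketch using the field names from the tables above (the helper name and the `keep` parameter are made up for illustration):

```python
def differing_fields(row_a, row_b, keep=("product", "color", "product_id")):
    """Return the identifier columns plus every field whose value
    differs between the two rows."""
    diff = [k for k in row_a if k not in keep and row_a.get(k) != row_b.get(k)]
    return list(keep) + diff

tag_1 = {"product": "phone", "color": "blue", "product_id": "tag_1",
         "description1": "pass", "description2": "pass",
         "description3": "", "description4": "fail"}
tag_2 = {"product": "phone", "color": "blue", "product_id": "tag_2",
         "description1": "fail", "description2": "pass",
         "description3": "pass", "description4": "fail"}

cols = differing_fields(tag_1, tag_2)
# keeps description1 and description3, matching desired outcome 1
```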
I am running a query to parse a two-level nested JSON that extracts only the second-level dict and puts it in the form of columns. The query works perfectly; however, when I run it, I get the error message below from Splunk.

This is the query:

base search
| spath
| foreach *.* [| eval unknown=if(isnull(unknown),"<<MATCHSEG1>>",mvdedup(mvappend(unknown,"<<MATCHSEG1>>")))]
| fields unknown
| mvexpand unknown
| eval _raw=replace(_raw,"\"".unknown."\"","\"known\"")
| spath path=known
| spath input=known
| table COLUMN1, COLUMN2,......COLUMN25

"The search you ran returned a number of fields that exceeded the current indexed field extraction limit. To ensure that all fields are extracted for search, set limits.conf: [kv] / indexed_kv_limit to a number that is higher than the number of fields contained in the files that you index."

Could you advise how I can resolve this issue, please? I am not sure of the number of fields my query will generate. Is there any dynamic limit I can set? Your help is much appreciated.
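For clarity, the "strip the first level, keep the second-level dict as columns" idea the query implements can be sketched in plain Python; the outer keys and COLUMN names here are hypothetical stand-ins for the real event:

```python
import json

def second_level_columns(raw):
    """Flatten a two-level nested JSON object by discarding the
    varying first-level keys and merging the second-level dicts
    into one row of columns."""
    row = {}
    for inner in json.loads(raw).values():
        if isinstance(inner, dict):
            row.update(inner)  # only the inner keys become columns
    return row

raw = '{"unknown_a": {"COLUMN1": 1, "COLUMN2": 2}, "unknown_b": {"COLUMN3": 3}}'
row = second_level_columns(raw)
```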