All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello @gomitamu, the CyberArk TA officially supports only CyberArk v12; official support for v14 is not available at this time. However, you can use the same TA to get the data in and tweak the props if needed. I have seen some people using this TA with v14 and it works fine for them.
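Tweaking the props for v14 would typically be done with a local props.conf override in the TA. A minimal sketch — the stanza name and settings below are illustrative placeholders, so check the TA's own default/props.conf for the real sourcetype name before copying anything:

```ini
# $SPLUNK_HOME/etc/apps/<TA_directory>/local/props.conf
# Stanza name is a placeholder; use the sourcetype the TA actually defines.
[cyberark:example]
TIME_PREFIX = rt=
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
```

Settings placed in local/ override the TA's defaults without being overwritten on upgrade.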
Hello @shaunm001, you should first check the internal logs in Splunk with a query such as: index="_internal" *O365* *ERROR* Based on the ERROR logs we can troubleshoot this further.
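A slightly more targeted variant of that internal-log search can group the errors so the failing component stands out. This is only a sketch — the exact log source and field names depend on the add-on version:

```spl
index=_internal *O365* log_level=ERROR
| stats count latest(_time) AS last_seen BY component
| convert ctime(last_seen)
| sort - count
```

If log_level is not extracted for your add-on's logs, fall back to the plain *ERROR* keyword search above.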
This worked well. Last question: if I wanted to ensure that the single record I find comes only from search 1 and not from search 2, how would I do that? Thanks again, Todd
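One common way to do that (sketched here with placeholder index/sourcetype values, since the original two searches aren't shown in this thread) is to tag each leg before combining them, then filter on the tag at the end. The dedup line stands in for whatever combining logic the thread already arrived at:

```spl
index=main sourcetype=first_source
| eval src="search1"
| append
    [ search index=main sourcetype=second_source
    | eval src="search2" ]
| dedup some_key sortby -_time
| where src="search1"
```

Because src is assigned before the results are merged, the final where clause keeps only records that originated in search 1.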
index=cim_modactions source=/opt/splunk/var/log/splunk/incident_ticket_creation_modalert.log host=sh* search_name=* source=* sourcetype=modular_alerts:incident_ticket_creation user=* action_mode=* action_status=* search_name=kafka*
    [| rest /servicesNS/-/-/saved/searches
     | search title=kafka*
     | rename dispatch.earliest_time AS "frequency", title AS "title", eai:acl.app AS "app", next_scheduled_time AS "nextRunTime", search AS "query", updated AS "lastUpdated", action.email.to AS "emailTo", action.email.cc AS "emailCC", action.email.subject AS "emailSubject", alert.severity AS "SEV"
     | eval severity=case(SEV == "5", "Critical-5", SEV == "4", "High-4", SEV == "3", "Warning-3", SEV == "2", "Low-2", SEV == "1", "Info-1")
     | eval identifierDate=now()
     | convert ctime(identifierDate) AS identifierDate
     | table identifierDate title lastUpdated, nextRunTime, emailTo, query, severity, emailTo, actions
     | fillnull value=""
     | sort -lastUpdated actions]
| table user search_name action_status date_month date_year _time
Hi @Karthikeya, it could be an access-permission issue on the extracted field. Go to Settings > Fields, click Field Extractions, and check whether the permissions for your field are correct. To ensure access for all users, set the app permissions to Global and the role permissions to Read for Everyone.
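If you prefer to check those permissions from the search bar rather than the UI, a REST sketch like this lists the sharing level and read roles for matching extractions (the field name is a placeholder):

```spl
| rest /servicesNS/-/-/data/props/extractions
| search title=*your_field*
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read
```

An extraction scoped to a single app or owner will show up here with sharing set to app or user instead of global.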
I'm struggling to get data in from Infoblox using the Splunk Add-on for Infoblox. I looked at the documentation and realized it doesn't support current versions: I'm using Infoblox NIOS 9.0.3, while the Splunk documentation says it supports Infoblox NIOS 8.4.4, 8.5.2, and 8.6.2. Specifically, the data is not parsing correctly, and everything lands in sourcetype=infoblox:port. Are there any more current ways to get data in from Infoblox? Can I get Splunk support to help me, since it's a Splunk-supported add-on?
How do I determine the server setting for my on-premises agent config when trying to send data via HTTP from a Windows server to my new cloud instance?
I have not found any new information.  I opened a support ticket to see if they could help.
This advice continues to be helpful, thank you!
I want to use the 2nd search as a subsearch, only bringing back the actions. How can I do this?

SEARCH
| rest /servicesNS/-/-/saved/searches
| search title=kafka*
| rename dispatch.earliest_time AS "frequency", title AS "title", eai:acl.app AS "app", next_scheduled_time AS "nextRunTime", search AS "query", updated AS "lastUpdated", action.email.to AS "emailTo", action.email.cc AS "emailCC", action.email.subject AS "emailSubject", alert.severity AS "SEV"
| eval severity=case(SEV == "5", "Critical-5", SEV == "4", "High-4", SEV == "3", "Warning-3", SEV == "2", "Low-2", SEV == "1", "Info-1")
| eval identifierDate=now()
| convert ctime(identifierDate) AS identifierDate
| table identifierDate title lastUpdated, nextRunTime, emailTo, query, severity, emailTo
| fillnull value=""
| sort -lastUpdated

SUBSEARCH
| rest "/servicesNS/-/-/saved/searches" timeout=300 splunk_server=*
| search disabled=0
| eval length=len(md5(title)), search_title=if(match(title,"[-\\s_]"),("RMD5" . substr(md5(title),(length - 15))),title), user='eai:acl.owner', "eai:acl.owner"=if(match(user,"[-\\s_]"),rtrim('eai:acl.owner',"="),user), app_name='eai:acl.app', "eai:acl.app"=if(match(app_name,"[-\\s_]"),rtrim('eai:acl.app',"="),app_name), commands=split(search,"|"), ol_cmd=mvindex(commands,mvfind(commands,"outputlookup")), si_cmd=mvindex(commands,mvfind(commands,"collect"))
| rex field=ol_cmd "outputlookup (?<ol_tgt_filename>.+)"
| rex field=si_cmd "index\\s?=\\s?(?<si_tgt_index>[-_\\w]+)"
| eval si_tgt_index=coalesce(si_tgt_index,'action.summary_index._name'), ol_tgt_filename=coalesce(ol_tgt_filename,'action.lookup.filename')
| rex field=description mode=sed "s/^\\s+//g"
| eval description_short=if(isnotnull(trim(description," ")),substr(description,0,127),""), description_short=if((len(description_short) > 126),(description_short . "..."),description_short), is_alert=if((((alert_comparator != "") AND (alert_threshold != "")) AND (alert_type != "always")),1,0), has_report_action=if((actions != ""),1,0)
| fields + app_name, description_short, user, splunk_server, title, search_title, "eai:acl.sharing", "eai:acl.owner", is_scheduled, cron_schedule, max_concurrent, dispatchAs, "dispatch.earliest_time", "dispatch.latest_time", actions, search, si_tgt_index, ol_tgt_filename, is_alert, has_report_action
| eval object_type=case((has_report_action == 1),"report_action",(is_alert == 1),"alert",true(),"savedsearch")
| where is_alert==1
| eval splunk_default_app = if((app_name=="splunk_archiver" OR app_name=="splunk_monitoring_console" OR app_name=="splunk_instrumentation"),1,0)
| where splunk_default_app=0
| fields - splunk_server, splunk_default_app
| search title=*kafka*
| table actions title user
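If the goal is only to pull the actions field into the first search's results, a join on the saved-search name may be simpler than a subsearch filter. A sketch, assuming the saved-search title matches the search_name field in the modactions events:

```spl
index=cim_modactions sourcetype=modular_alerts:incident_ticket_creation search_name=kafka*
| join type=left search_name
    [ | rest /servicesNS/-/-/saved/searches
      | search title=kafka* disabled=0
      | rename title AS search_name
      | table search_name actions ]
| table _time user search_name action_status actions
```

join carries subsearch result limits, but those are rarely a problem for a list of saved searches; the rename makes the join key match on both sides.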
The Monitoring Console uses metrics data provided by servers with a Splunk forwarder installed. The metrics data appears to use the hostname found in /etc/hostname on Linux servers. However, our forwarders are set up with a hostname specified in ../etc/system/local/inputs.conf, where a "cname" for the host is given. This results in a difference between the "host" used in searches and the "hostname" shown in Monitoring Console dashboards and alerts. Is there a best practice for unifying the host and hostname in the Monitoring Console?
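One approach is to keep the event host in inputs.conf and the instance name in server.conf aligned, since the Monitoring Console keys some views on the reported instance name rather than the event host field. A sketch with placeholder names — verify which name your Monitoring Console version actually uses before changing production configs:

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf
[default]
host = app01.example.com

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
serverName = app01.example.com
```

With both set to the same value, searches and Monitoring Console dashboards should refer to the machine by one consistent name.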
Hi @ITWhisperer, thanks for sharing. I am okay with users, but we have a few roles, like engineer, that should have access to all indexes. What can I do in this case? Can I put index names in a drop-down and pass that token into the base search, like index=$index_name$? Will that work? By the way, is it good practice to have a common dashboard spanning many indexes (maybe 200+)? It is fine for users who are restricted to specific indexes, but what about the engineer role and admin? Every time the dashboard runs, all indexes will be searched by default (*); will that cause performance issues in Splunk? How can I overcome this?
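Passing a token like index=$index_name$ from a drop-down does work. To avoid an expensive index=* default for privileged roles, the drop-down can be populated dynamically and required to have a selection. A sketch of a populating search — eventcount is cheap because it reads index metadata rather than scanning events:

```spl
| eventcount summarize=false index=*
| dedup index
| table index
| sort index
```

Each role will only see the indexes it can search, so the same dashboard stays safe for restricted users while engineers pick one index at a time instead of searching all 200+ by default.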
Hi @Gregory.Burkhead, Thank you for asking your question on Community. Since it's been a few days with no reply, did you happen to find any new information or a solution you can share? If you're still looking for help, you can contact AppDynamics Support: How do I open a case with AppDynamics Support? 
Hi @ckarthikin, sorry, but the issue is at ingestion level: you have to assign a correctly defined sourcetype (standard or custom) to your data; then you can search your data correctly parsed and aggregated. So the questions are the ones asked before: which technology? Which add-on is used for parsing? If none, you have to create a correct sourcetype and apply it to your data source. Ciao. Giuseppe
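As a sketch of what "assign a correctly defined sourcetype" looks like in configuration — all paths and names here are hypothetical placeholders, not taken from this thread:

```ini
# inputs.conf -- assign the sourcetype at ingestion time
[monitor:///var/log/myapp/app.log]
sourcetype = myapp:log
index = main

# props.conf -- define how that sourcetype is parsed
[myapp:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
```

The inputs.conf stanza tags the data at the source; the matching props.conf stanza tells the indexer how to break lines and extract timestamps for that sourcetype.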
Hi @Alberto.Astolfi, Thank you so much for coming back and sharing the solution. 
Am I the only one with this issue? In the end, we made the decision to wipe the installations clean and installed 9.3.2. After configuring deploymentclient.conf for several instances, the UI is now working fine.
Index access is controlled by role, so if your separate groups of users are assigned different roles, with each role only able to access the indexes associated with its app, then they can all use a common search that lists all the indexes, and each user will only be able to see the data from the indexes they have access to.
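The per-role index restriction described above is configured in authorize.conf; role and index names below are placeholders:

```ini
# authorize.conf
[role_team_a]
srchIndexesAllowed = team_a_idx
srchIndexesDefault = team_a_idx

[role_team_b]
srchIndexesAllowed = team_b_idx
srchIndexesDefault = team_b_idx
```

With this in place, a shared dashboard search over multiple indexes silently returns only the indexes each user's role is allowed to search.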
Assuming your events follow the pattern shown, you could try something like this:

| rex "[^\|]+\|(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{4})\|"
| streamstats count(time) as eventnumber
| stats values(time) as time list(_raw) as event by eventnumber
| eval _time=strptime(time,"%F %T.%4N")

This will also reset the _time timestamp to match the time found in the event data.