All Posts

Hi @manideepa  Are you referring to service indicators in the glass tables versus notables generated in a table? Could you please share screenshots or sample data so that we can make sure we're giving you the best answer?
Hi @dlm  I'm not entirely sure what you're trying to achieve, so this might not be the best way to do it, but hopefully one of the examples below will help. If you can give us more details (ideally with examples) then we might be able to give a more specific answer.
I started by creating a lookup. Both examples use a subsearch to get the field list from the lookup.
Option 1: This adds a prefix of my_ to the fields listed in the lookup
| makeresults
| eval CPU=45, Memory=12.3, Disk=84.4, Network=92
| rename [| inputlookup fields.csv | eval fieldName=fieldName+" AS my_"+fieldName | stats list(fieldName) as search ]
Option 2: This uses "table" to keep only the fields listed in the lookup, plus an optional field listing the field names (an example of foreach)
| makeresults
| eval CPU=45, Memory=12.3, Disk=84.4, Network=92
| table [| inputlookup fields.csv | stats list(fieldName) as search]
| foreach * [| eval fields=mvappend(fields,"<<FIELD>>")]
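In case it's useful, here is a quick sketch of how a fields.csv lookup like the one assumed above could be created; the fieldName column matches what the subsearches expect, but the values are just placeholders:
| makeresults count=4
| streamstats count as row
| eval fieldName=case(row=1,"CPU", row=2,"Memory", row=3,"Disk", row=4,"Network")
| table fieldName
| outputlookup fields.csv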
Hey, thanks for your answer. After i posted this, i went to investigate the source of the data and any props or transforms set up for it.  I ran the following from our forwarder, the server that has the netskope TA app installed on it.   ./splunk btool props list --debug | grep "netskope:application"   I dont have any transforms with that tag.  Here is the output of the default netskope application inputs: [source::...netskope_file_hash_modalert.log*] SHOULD_LINEMERGE = true sourcetype = tanetskopeappforsplunk:log TZ = UTC [source::...netskope_url_modalert.log*] SHOULD_LINEMERGE = true sourcetype = tanetskopeappforsplunk:log TZ = UTC [source::...ta-netskopeappforsplunk*.log*] SHOULD_LINEMERGE = true sourcetype = tanetskopeappforsplunk:log TZ = UTC [source::...ta_netskopeappforsplunk*.log*] SHOULD_LINEMERGE = true sourcetype = tanetskopeappforsplunk:log TZ = UTC [netskope:event:v2] SHOULD_LINEMERGE = 0 category = Splunk App Add-on Builder pulldown_type = 1 [netskope:alert:v2] SHOULD_LINEMERGE = 0 category = Splunk App Add-on Builder pulldown_type = 1 [netskope:web_transaction] INDEXED_EXTRACTIONS = W3C TIME_FORMAT = %Y-%m-%d %H:%M:%S TZ = Etc/GMT SHOULD_LINEMERGE = 0 TRUNCATE = 999999 EXTRACT-from_source = .*[\\\/](?<input_name>.*)_(?<bucket_name>\d{8})_(?<bucket_file_name>.*) in source EVAL-vendor_product = "Netskope" FIELDALIAS-app = x_cs_app AS app FIELDALIAS-timestamp = _time as timestamp FIELDALIAS-bytes_in = cs_bytes AS bytes_in FIELDALIAS-bytes_out = sc_bytes AS bytes_out FIELDALIAS-category = x_category AS category FIELDALIAS-dest = s_ip AS dest EVAL-http_content_type = coalesce(cs_content_type, sc_content_type) FIELDALIAS-http_method = cs_method AS http_method FIELDALIAS-http_referrer = cs_referer AS http_referrer FIELDALIAS-http_user_agent = cs_user_agent AS http_user_agent FIELDALIAS-response_time = time_taken AS response_time FIELDALIAS-src=c_ip AS src FIELDALIAS-status = sc_status AS status FIELDALIAS-uri_path = cs_uri AS uri_path FIELDALIAS-uri_query = cs_uri_query AS uri_query FIELDALIAS-user = cs_username AS user EVAL-fullurl = cs_uri_scheme . "://" . cs_dns . cs_uri . if(isnull(cs_uri_query), "", "?") . coalesce(cs_uri_query,"") EVAL-x_c_browser=if(isnull(x_c_browser),"N/A",x_c_browser) EVAL-x_c_device=if(isnull(x_c_device),"N/A",x_c_device) FIELDALIAS-dest_port = cs_uri_port AS dest_port EVAL-url = cs_uri_scheme . "://" . cs_dns . cs_uri . if(isnull(cs_uri_query), "", "?") . coalesce(cs_uri_query,"") FIELDALIAS-duration = time_taken AS duration FIELDALIAS-http_referrer_domain = cs_referer AS http_referrer_domain EVAL-site = replace(cs_host, "^([^\.]+).*", "\1") [source::netskope_events_v2_connection] KV_MODE = json sourcetype = netskope:connection TIME_PREFIX = "timestamp": MAX_TIMESTAMP_LOOKAHEAD = 20 TIME_FORMAT = %s SHOULD_LINEMERGE = false TRUNCATE = 999999 [source::...*events_iterator_page*.csv] INDEXED_EXTRACTIONS = CSV sourcetype = netskope:connection TIMESTAMP_FIELDS=timestamp TIME_FORMAT = %s SHOULD_LINEMERGE = false TRUNCATE = 999999 [netskope:connection] FIELDALIAS-src_ip = srcip AS src_ip FIELDALIAS-src=srcip AS src FIELDALIAS-dest_ip = dstip AS dest_ip FIELDALIAS-dest = dstip AS dest EVAL-dvc = coalesce(hostname, device) EVAL-app_session_key = app_session_id . "::" . 
host EVAL-vendor_product = "Netskope" FIELDALIAS-page_duration = page_duration AS duration FIELDALIAS-bytes = numbytes AS bytes FIELDALIAS-in_bytes = client_bytes AS bytes_in FIELDALIAS-category = appcategory AS category FIELDALIAS-out_bytes = server_bytes AS bytes_out FIELDALIAS-http_referrer = useragent AS http_user_agent EVAL-http_user_agent_length = len(useragent) FIELDALIAS-page = page AS url FIELDALIAS-src_location = src_location AS src_zone FIELDALIAS-dest_location = dst_location AS dest_zone EVAL-url_length = len(page) # from netskope:web EVAL-action = if(isnotnull(action),action,"isolate") FIELDALIAS-oc = object_type AS object_category FIELDALIAS-fu = from_user AS src_user [netskope:audit] SHOULD_LINEMERGE = false TIME_PREFIX = "timestamp": MAX_TIMESTAMP_LOOKAHEAD = 20 TIME_FORMAT = %s TRUNCATE = 999999 KV_MODE = json EVAL-vendor_product = "Netskope" # acl_modified, cleared, created, deleted, modified, read, stopped, updated EVAL-action = case(match(audit_log_event,"create|Create"),"created", match(audit_log_event,"granted"), "acl_modified", match(audit_log_event, "ack|Ack"), "cleared", match(audit_log_event, "delete|Delete"), "deleted", match(audit_log_event,"edit|Edit|Add"),"modified",match(audit_log_event,"Push|push|Reorder|update|Update"),"updated",match(audit_log_event,"Disable|disable"), "stopped",1=1,"unknown") EVAL-status = case(match(audit_log_event,"success|Success"),"success",match(audit_log_event,"fail|Fail"),"failure",1=1,"unknown") FIELDALIAS-severity_id = severity_level AS severity_id FIELDALIAS-data_type = supporting_data.data_type AS object FIELDALIAS-date_type_attr = supporting_data.data_values{} AS object_attrs FIELDALIAS-object_cat = category AS object_category FIELDALIAS-result = audit_log_event AS result [source::netskope_events_v2_application] KV_MODE = json TIME_PREFIX = "timestamp": MAX_TIMESTAMP_LOOKAHEAD = 20 TIME_FORMAT = %s sourcetype = netskope:application SHOULD_LINEMERGE = false TRUNCATE = 999999 [source::...*events_iterator_application*.csv] INDEXED_EXTRACTIONS = CSV sourcetype = netskope:application TIMESTAMP_FIELDS=timestamp TIME_FORMAT = %s SHOULD_LINEMERGE = false TRUNCATE = 999999 [netskope:application] FIELDALIAS-src_ip = srcip AS src_ip FIELDALIAS-src=srcip AS src FIELDALIAS-dest_ip = dstip AS dest_ip FIELDALIAS-dest = dstip AS dest EVAL-dvc = coalesce(hostname, device) FIELDALIAS-src_location = src_location AS src_zone FIELDALIAS-dest_location = dst_location AS dest_zone FIELDALIAS-signature = policy AS signature EVAL-file_hash = coalesce(local_sha256, local_md5) FIELDALIAS-file_name = filename AS file_name EVAL-app_session_key = app_session_id . "::" . 
host EVAL-vendor_product = "Netskope" FIELDALIAS-oc = object_type AS object_category FIELDALIAS-fu = from_user AS src_user [source::netskope_events_v2_network] KV_MODE = json TIME_PREFIX = "timestamp": MAX_TIMESTAMP_LOOKAHEAD = 20 TIME_FORMAT = %s sourcetype = netskope:network SHOULD_LINEMERGE = false TRUNCATE = 999999 [source::...*events_iterator_network*.csv] INDEXED_EXTRACTIONS = CSV sourcetype = netskope:network TIMESTAMP_FIELDS=timestamp TIME_FORMAT = %s SHOULD_LINEMERGE = false TRUNCATE = 999999 [netskope:network] FIELDALIAS-src_ip = srcip AS src_ip FIELDALIAS-src=srcip AS src FIELDALIAS-dest_ip = dstip AS dest_ip FIELDALIAS-dest = dstip AS dest EVAL-dvc = coalesce(hostname, device) EVAL-vendor_product = "Netskope" FIELDALIAS-bytes = numbytes AS bytes FIELDALIAS-in_bytes = client_bytes AS bytes_in FIELDALIAS-out_bytes = server_bytes AS bytes_out FIELDALIAS-packets_in = client_packets AS packets_in FIELDALIAS-packets_out = server_packets AS packets_out FIELDALIAS-src_port = srcport AS src_port FIELDALIAS-dest_port = dstport AS dest_port FIELDALIAS-session_id = network_session_id AS session_id FIELDALIAS-duration = session_duration AS duration [netskope:incident] SHOULD_LINEMERGE = false TIME_PREFIX = "timestamp": MAX_TIMESTAMP_LOOKAHEAD = 20 TIME_FORMAT = %s TRUNCATE = 999999 KV_MODE = json FIELDALIAS-signature_id = internal_id AS signature_id FIELDALIAS-action = dlp_match_info{}.dlp_action AS action FIELDALIAS-object_path = url AS object_path FIELDALIAS-object_category = true_obj_category AS object_category FIELDALIAS-signature = title AS signature FIELDALIAS-src=src_location AS src FIELDALIAS-src_user = from_user AS src_user FIELDALIAS-dest = dst_location AS dest # FIELDALIAS-user = to_user AS user EVAL-user = coalesce(user, to_user) EVAL-vendor_product = "Netskope" [source::netskope_alerts_v2] KV_MODE = json TIME_PREFIX = "timestamp": MAX_TIMESTAMP_LOOKAHEAD = 20 TIME_FORMAT = %s sourcetype = netskope:alert SHOULD_LINEMERGE = false TRUNCATE = 999999 [source::...*alerts_iterator*.csv] INDEXED_EXTRACTIONS = CSV SHOULD_LINEMERGE = false TIMESTAMP_FIELDS=timestamp TIME_FORMAT = %s sourcetype = netskope:alert TRUNCATE = 999999 [netskope:alert] EVAL-dvc = coalesce(hostname, device) EVAL-vendor_product = "Netskope" EVAL-severity_id = coalesce(severity_id, severity_level_id) EVAL-severity = coalesce(severity_level, dlp_rule_severity, dlp_severity, mal_sev, malware_severity, severity, severity_level) EVAL-object_path = if(file_path="NA", object, coalesce(file_path, object)) FIELDALIAS-id = internal_id AS id FIELDALIAS-srcip = srcip AS src FIELDALIAS-dstip = dstip AS dest EVAL-file_hash = coalesce(local_sha256, local_md5) FIELDALIAS-signature = alert_name AS signature FIELDALIAS-oc = object_type AS object_category FIELDALIAS-fu = from_user AS src_user FIELDALIAS-src_location = src_location AS src_zone FIELDALIAS-dest_location = dst_location AS dest_zone FIELDALIAS-file_name = filename AS file_name [netskope:infrastructure] SHOULD_LINEMERGE = false TIME_PREFIX = "timestamp": MAX_TIMESTAMP_LOOKAHEAD = 20 TIME_FORMAT = %s TRUNCATE = 999999 KV_MODE = json FIELDALIAS-device = device_name AS device EVAL-app = "Netskope" EVAL-vendor_product = "Netskope" [netskope:endpoint] SHOULD_LINEMERGE = false TIME_PREFIX = "timestamp": MAX_TIMESTAMP_LOOKAHEAD = 20 TIME_FORMAT = %s TRUNCATE = 999999 KV_MODE = json EVAL-vendor_product = "Netskope" [netskope:clients] KV_MODE = json FIELDALIAS-make = attributes.host_info.device_make AS make FIELDALIAS-model = attributes.host_info.device_model AS model 
FIELDALIAS-os = attributes.host_info.os AS os FIELDALIAS-ver = attributes.host_info.os_version AS version FIELDALIAS-name = attributes.host_info.hostname AS dest FIELDALIAS-user = attributes.users{}.username AS user EVAL-vendor_product = "Netskope" SHOULD_LINEMERGE = false TIME_PREFIX = "timestamp": MAX_TIMESTAMP_LOOKAHEAD = 35 TIME_FORMAT = %s TRUNCATE = 999999 [netskope:api] KV_MODE = json EVAL-vendor_product = "Netskope" [netskope:alertaction:file_hash] FIELDALIAS-action_status = status AS action_status FIELDALIAS-action_name = orig_action_name AS action_name [netskope:alertaction:url] FIELDALIAS-action_status = status AS action_status FIELDALIAS-action_name = orig_action_name AS action_name # For proper ingestion of Alert action events used in Splunk ES App [source::...stash_common_action_model] sourcetype=stash_common_action_model [stash_common_action_model] TRUNCATE = 0 # only look for ***SPLUNK*** on the first line HEADER_MODE = firstline # we can summary index past data, but rarely future data MAX_DAYS_HENCE = 2 MAX_DAYS_AGO = 10000 # 5 years difference between two events MAX_DIFF_SECS_AGO = 155520000 MAX_DIFF_SECS_HENCE = 155520000 TIME_PREFIX = (?m)^\*{3}Common\sAction\sModel\*{3}.*$ MAX_TIMESTAMP_LOOKAHEAD = 25 LEARN_MODEL = false # break .stash_new custom format into events SHOULD_LINEMERGE = false BREAK_ONLY_BEFORE_DATE = false LINE_BREAKER = (\r?\n==##~~##~~ 1E8N3D4E6V5E7N2T9 ~~##~~##==\r?\n) TRANSFORMS-0parse_cam_header = orig_action_name_for_stash_cam,orig_sid_for_stash_cam,orig_rid_for_stash_cam,sourcetype_for_stash_cam TRANSFORMS-1sinkhole_cam_header = sinkhole_cam_header

I also looked at and ran your suggested command (good command, by the way) and got the following output: I don't see any evidence of us modifying or creating a dlp_rule value. I had specifically mapped dlp_rule to the values below, and these are the values I was seeing. I was using this mapping and these values in every other query as well, so I must have seen them. This is the default Netskope app. I also looked for any possible sourcetypes or transforms via the GUI and didn't see any. I am working on this data with a coworker who has insight into the Netskope portal, and he said the dlp_rule field is blank there as well. Even if the incoming data had changed, the old data shouldn't have changed, and I haven't updated the Netskope app.
There are too many fields to paste in here for the logs themselves, but here are the fields we are looking at:
dlp_fail_reason:
dlp_file:
dlp_incident_id: 0
dlp_is_unique_count:
dlp_mail_parent_id:
dlp_parent_id: 0
dlp_profile:
dlp_rule:
dlp_rule_count: 0
dlp_rule_severity:
dlp_scan_failed:
dlp_unique_count: 0
dst_country: US
dst_geoip_src: 0
dst_latitude: 7.40594
dst_location: Mow
dst_longitude: -1.1551
dst_region: C
dst_timezone: America/
dst_zipcode: N/A
dsthost:
dstip: 1.5.5.5
dstport: 455
With this specific dashboard and use case, I am searching over All Time, and the field is generally blank. We only get 3 dlp_rule values; the other 99% are blank. I'm not sure how to track down whether the data set changed, given that I'm searching over All Time right now.
Thanks for any guidance
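For what it's worth, this is the sort of search I'm thinking of using to narrow down when the field stopped being populated; the index and sourcetype below are assumptions for my environment:
index=netskope sourcetype=netskope:alert
| eval dlp_rule_state=if(isnull(dlp_rule) OR dlp_rule="", "blank", "populated")
| timechart span=1d count by dlp_rule_state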
Hi @g_cremin  I believe "actions" should be an array of actions, not a dict? This might be affecting things.
...
"actions": [
    {
        "action": "test_connectivity",
        "identifier": "test_connectivity",
        "description": "Tests connectivity to Wazuh",
        "type": "test",
        "read_only": true,
        "parameters": [],
        "output": []
    }
],
...
For more detail on the app.json schema, check out https://docs.splunk.com/Documentation/SOAR/current/DevelopApps/Metadata
Hi
To reset the admin password, ensure you stop Splunk completely before deleting the passwd file.
# Stop Splunk Enterprise
cd $SPLUNK_HOME/bin
./splunk stop
# Remove the password file
rm $SPLUNK_HOME/etc/passwd
Now create a user-seed.conf file ($SPLUNK_HOME/etc/system/local/user-seed.conf):
[user_info]
USERNAME = admin
PASSWORD = YourPassword
Once done, start Splunk:
$SPLUNK_HOME/bin/splunk start
You should now be able to log in with the username/password set in the user-seed.conf file.
For more info check the following docs page: https://docs.splunk.com/Documentation/Splunk/latest/Security/Secureyouradminaccount#Reset_the_administrator_password
Hi @Abass42  You're right in that editing historic data in Splunk isn't really possible (you can delete data if you have the can_delete capability, though). What I'm wondering is whether one of two things has happened:
1) The data has changed
2) Your field extractions have changed
They ultimately boil down to the same question: how does the "dlp_rule" field get defined? Is it an actual value in the _raw data (such as [time] - component=something dlp_rule=ABC user=Bob host=BobsLaptop), OR is dlp_rule determined/evaluated/extracted from other data in the event, such as a status code or maybe a regular expression? If so, the questions become: has the data format changed slightly? This could be something as simple as an additional space or field in the raw data which has stopped the field extraction working. Or has the field extraction itself been changed at all? If you're able to provide a sample event then it might help (redacted, of course). Another thing you could do, if you are unsure which fields are extracted, is run btool on your search head (if you are running on-prem), such as:
/opt/splunk/bin/splunk cmd btool props list netskope:application
Are you able to look at a raw historical event where you got a match you expected and compare it to a recent event to see if there are any differences?
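If it helps, here is a rough sketch of that kind of comparison; the index, sourcetype, and the 30-day boundary are assumptions you would need to adjust:
index=your_index sourcetype=netskope:alert
| eval period=if(_time < relative_time(now(), "-30d@d"), "older", "recent")
| stats count as events count(dlp_rule) as events_with_dlp_rule by period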
Hi @T2  The Cisco Security Cloud app does have a Duo Overview (Dashboard Studio) dashboard, but this is only high-level and not the same as the 7 (Classic XML) dashboards in the Duo app. The Duo app uses a static source=duo and a macro to define the Duo index, whereas the Cisco Security Cloud app uses sourcetypes such as "cisco:duo:authentication" and also a Data Model for consuming the data via the overview dashboard. Ultimately I think the answer is yes: if you have dashboards/searches built on the existing Duo app feed then you are likely going to need to update them to reflect the data coming in via the new app. I would recommend running the Cisco app in a development environment or locally, if possible, so that you can compare the data side by side and work to retain parity between the apps before migrating your production environment.
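As a rough starting point for that side-by-side comparison, something like this could show how the two feeds line up; the index scoping is an assumption (the Duo app actually resolves its index through a macro):
(source=duo OR sourcetype="cisco:duo:authentication")
| stats count min(_time) as earliest max(_time) as latest by index sourcetype source
| convert ctime(earliest) ctime(latest)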
Hello all, I am fairly new to the Splunk community and I'm attempting to reset my Splunk admin password, and for whatever reason it does not work. I go and delete "etc/passwd" and restart my Splunk instance and attempt to log in to the web interface, but it never prompts me for a reset. I have even tried commands to do it manually, but nothing works. Has anyone else had a problem like this? 
Here are instructions for how to do it: https://docs.splunk.com/Documentation/Splunk/9.4.1/Knowledge/Manageknowledgeobjectpermissions#Enable_a_role_other_than_admin_and_power_to_set_permissions_and_share_objects
This is not a capability; instead, it requires a role that has write access to the app where the dashboard lives, and the user must also own the dashboard. Only users with the admin role can share other people's KOs.
Will any of the knowledge objects or dashboards be affected once the add-on is applied when moving from DUO to Cisco?
Does this Lantern article help you? Also watch the video clip: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Using_ingest_actions_in_Splunk_Enterprise. Another example of how to use ingest actions with Palo Alto logs: https://lantern.splunk.com/Data_Descriptors/Palo_Alto_Networks/Using_ingest_actions_to_filter_Palo_Alto_logs.
It sounds like you've settled on what might be an unsuitable solution to the problem. Tell us more about the problem itself and we may be able to suggest a better solution. Lookup tables are for enriching events with additional fields based on one or more fields already in the events; they are not a conditional-execution mechanism. If this is part of a dashboard (or can be made into a dashboard) then you have better options: you can have inputs the user can select to determine which calculations are made, as in the sketch below. That is well-trodden ground, so let us know if that path sounds feasible.
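For example, a minimal Simple XML sketch of that pattern; the token name, choices, and the stats function are placeholders rather than anything from your use case:
<form>
  <fieldset>
    <input type="dropdown" token="calc">
      <label>Calculation</label>
      <choice value="avg">Average</choice>
      <choice value="max">Maximum</choice>
      <default>avg</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_internal source=*metrics.log group=per_sourcetype_thruput | stats $calc$(kb) AS result by series</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>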
To add a bit of additional context to what's already been said: while most of the "other" Splunk components should be able to communicate with each other (or at least should be able to), forwarders are often (usually) in remote sites and environments which are completely separate from the "main" Splunk infrastructure, so in many cases querying them directly doesn't make much sense. So yes, for _some_ HFs a separate role could be beneficial, but there can be many HFs (and most UFs) to which you should simply have no access. And that's also why app management with the DS works in pull mode: you serve your apps from the DS, but it's the deployment clients (usually forwarders) which pull their apps from the DS, and you have no way of forcing them to do so. They have their interval with which they "phone home", and that's it.
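To illustrate the pull model, the phone-home behaviour is configured on the client itself; a minimal deploymentclient.conf sketch with a placeholder host and interval:
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on the forwarder
[deployment-client]
# how often this client phones home to the DS, in seconds (placeholder value)
phoneHomeIntervalInSecs = 60

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089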
Ingest actions are exactly what I am looking for. While not as intuitive as an entire tool dedicated to modifying data, I think this would do the trick, as long as I can trim out field values before forwarding them to an indexer for ingestion. I am looking for docs explaining how to do just that, but I am struggling to find step-by-step instructions. Can you send me some good docs that show how to use expressions to do what I want? Thank you. This may be my ticket to saving us hundreds of thousands of dollars in licensing costs.
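Separately from the ingest actions UI (which can generate equivalent rules for you), a plain props.conf SEDCMD is one way to strip a bulky value out of _raw before it is indexed; a minimal sketch with a placeholder sourcetype and field name:
# props.conf on the component that parses the data (HF or indexer)
[my:verbose:json]
# remove a hypothetical "debug_payload" key/value pair from the raw event at parse time
SEDCMD-trim_debug = s/"debug_payload":"[^"]*",?//g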
Yeah, we make adjustments with new indexes; however, the large indexes were created before I got hired, so I'm actively trying to reduce ingest with what's already flowing. Great advice, btw.
Brett - do you have any further guidance on making this app (7371) work? We are trying to ingest Atlassian logs from a trusted partner into our Splunk. They pointed us to app 7371, which we installed, but we don't see any options for configuration, not like we're used to with other apps, anyway. There's no "Input" tab, no "Configuration" tab, no "Proxy" tab. We get one page with 'name', 'update checking', 'visible' and 'upload asset', and nothing else. There's no place to enter the API key they sent us and nowhere to enter a file path. At this point we have the app installed but no idea how to get the logs to come over.
It should be stated up front that indexes cannot be reduced in size. You must wait for buckets to be frozen for data to go away. The best you can do is reduce how much is stored in new buckets. You've already taken a good first step by eliminating duplicate events. Next, look at indexed fields. Fields are best extracted at search time rather than at index time. Doing so helps indexer performance, saves space in the indexes, and offers more flexibility with fields. Look at the INDEXED_EXTRACTIONS settings in your props.conf files; each of them will create index-time fields. JSON data is especially verbose, so KV_MODE=json should be used instead.
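For example, a minimal props.conf sketch of that change, using a placeholder sourcetype name:
# props.conf for a hypothetical JSON sourcetype
[my:json:data]
# index-time extraction creates an indexed field for every key and grows the index:
# INDEXED_EXTRACTIONS = json
# search-time extraction keeps the index smaller and still surfaces the same fields:
KV_MODE = json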
I totally agree with @PickleRick: you should disable your swap at least temporarily and, after you have confirmed that everything is working and/or fixed the root cause of the swap usage, remove it permanently. When you have dedicated servers for Splunk, those should be sized correctly to run your normal workload. 
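For reference, a minimal sketch of disabling swap on a typical Linux host (double-check against your distribution before making it permanent):
# turn swap off immediately (lasts until reboot)
sudo swapoff -a
# make it permanent by commenting out swap entries in /etc/fstab, for example:
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab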
You need to remember that setting a new sourcetype value for your event doesn't make it travel through the ingest pipeline again! So don't expect that setting the sourcetype to B will apply B's definitions to that event. No, it just carries on through the pipeline with sourcetype A's settings.
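To illustrate with a sketch (the stanza names and regex are placeholders): a typical index-time sourcetype rewrite only changes the metadata value, while the event keeps moving through the pipeline under sourcetype A's parsing settings:
# props.conf
[sourcetype_A]
TRANSFORMS-rename_st = rewrite_to_B

# transforms.conf
[rewrite_to_B]
REGEX = some_pattern
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::sourcetype_B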
You could also use site0 as the site value instead of site1 or site2. Then searches are managed a little differently than with an exact site<#>. You can find more information in the docs that have been pointed out to you.
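For illustration, a minimal server.conf sketch for a search head attached to a multisite cluster; the manager URI is a placeholder:
# $SPLUNK_HOME/etc/system/local/server.conf on the search head
[general]
# site0 disables search affinity instead of tying searches to one site
site = site0

[clustering]
mode = searchhead
manager_uri = https://cluster-manager.example.com:8089
multisite = true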