All Posts

Great. Could you please share what you have found? I would like to see it. Thanks
Hi @Hemant93, masking sensitive data is typically performed on the heavy forwarder / indexer before it goes into the Splunk index. We can do that job with a props.conf file:

[maskpii]
SEDCMD-pii-dob = s/@dob=['"][^'"]+['"]/@dob='***MASKED***'/g
SEDCMD-pii-ssn = s/@ssn=['"][^'"]+['"]/@ssn='***MASKED***'/g

This stanza applies to the maskpii sourcetype and tells Splunk to change any dob or ssn value to "***MASKED***". Put that props.conf on either the heavy forwarder or the indexer (wherever your data is sent first) and restart Splunk. Using that file I ingested your sample data and the dob and ssn values came out masked as expected.
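If you want to try the sed expressions on a sample event before touching any .conf files, the rex command's sed mode uses the same syntax as SEDCMD, so a quick sketch like this (the sample _raw value here is made up for illustration) lets you validate them at search time:

```
| makeresults
| eval _raw="<record @dob='1990-01-01' @ssn='123-45-6789'/>"
| rex mode=sed "s/@dob=['\"][^'\"]+['\"]/@dob='***MASKED***'/g"
| rex mode=sed "s/@ssn=['\"][^'\"]+['\"]/@ssn='***MASKED***'/g"
| table _raw
```

Once the output shows both values replaced, the same expressions should behave the same way as SEDCMD at index time.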
And what are your site RF/SF settings and how many indexers do you have in each site?
Thanks. @bowesmana's solution seems to work, but it seems processing it externally is more efficient. Many thanks to all.
Thank you for your answer, @jenniandthebets. 1. I did not want the playbook to run at all, but I see that there is no way around it. If that is the case, why is there an option to choose tags when creating a playbook? 2. As for the datapath, I believe the filter should be [in] and not "==". Anyway, it did not always work well for me; I found that if I first create a simple code block that only takes "container:tags" as input and outputs it as a variable, only then does the filter work properly. Am I the only one this happens to?
Hi all, I'm on Splunk version 9.0.2. After decommissioning one indexer in a multisite cluster, I can't get back to meeting my SF/RF. A rolling restart and a CM restart (splunkd) had no effect. I have 3 SF tasks pending with the same message: "Missing enough suitable candidates to create a replicated copy in order to meet replication policy. Missing={ site2:1 }". I have tried Resync and Roll with no success. In the details of the pending task, I can see that the bucket is only on one indexer, and not searchable on the other indexers of the cluster. My SF = 2 and RF = 2. I'd like to be clean before decommissioning the next indexer. Any advice or help to restore my SF/RF will be highly appreciated (it is a production issue). Thanks in advance.
Multivalued fields are separate entities, which means Splunk doesn't keep any "connection" between values in those fields. For Splunk each field is just a single "multivalued value" (yes, I know it sounds bad ;-)). So you have to combine those values manually. @ITWhisperer already showed one solution, but for me it's a bit "brute force". My idea of a more "splunky" approach to splitting those products and product_prices would be to do

| eval zipped=mvzip(products,product_prices,":")
| mvexpand zipped
| eval zipped=split(zipped,":")
| eval products=mvindex(zipped,0), product_prices=mvindex(zipped,1)

Then you can do your stats.
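A self-contained way to try the mvzip approach without ingesting any data is to fabricate one event with makeresults (the field values below mirror the first CSV row from the question, purely for illustration):

```
| makeresults
| eval products="product_100,product_200", product_prices="100,200"
| makemv delim="," products
| makemv delim="," product_prices
| eval zipped=mvzip(products, product_prices, ":")
| mvexpand zipped
| eval products=mvindex(split(zipped,":"),0),
       product_prices=mvindex(split(zipped,":"),1)
| stats count by products, product_prices
```

This should yield one row per product/price pair (product_100 with 100, product_200 with 200), rather than the cross-product that stats over two independent multivalue fields produces.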
| eval row=mvrange(0,mvcount(products))
| mvexpand row
| eval products=mvindex(products, row)
| eval product_prices=mvindex(product_prices, row)
| stats count(products) by products,product_prices
The cloud is managed by clever automation on Splunk's side, so the apps you upload to Splunk Cloud land on the indexers as well. So the proper way to define index-time props and transforms is to make an app with those settings and install it on your Cloud instance.
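As a sketch of what such an app could look like (the app name, sourcetype name, and setting values here are made up for illustration), a minimal index-time parsing app needs little more than one props.conf:

```
my_parsing_app/
    default/
        props.conf

# default/props.conf
[my_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = timestamp=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Package the directory as a .tar.gz / .spl and install it on your Splunk Cloud stack through the app management UI (self-service app install where available), and Splunk's automation distributes it to the indexer tier.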
This is a very old thread. You're unlikely to get an answer from people posting here. It's best if you create a new thread, possibly linking to this one for reference.
So, I've been away from Splunk for several years now, and am now revisiting it. I've got a scenario where I would like to track certain metrics from imported data. I created a simple CSV with just a few entries to demonstrate the issue I'm having. Below is the source data I created:

customer_id  Time        customer_fname  customer_lname  products                 product_prices
111          12/1/2023   John            Doe             product_100,product_200  100,200
222          12/11/2023  Suzy            Que             product_100              100
333          12/15/2023  Jack            Jones           product_300              300
111          12/18/2023  John            Doe             product_400              400

In this scenario these are just examples of customers, the items they purchased, and the price paid. After uploading the file and displaying the data in a table, it looks as expected:

source="test_sales.csv"
| table customer_id,customer_fname,customer_lname,products,product_prices

Upon using makemv to convert "products" and "product_prices" to multivalue fields, again the results are as expected, and the product and price align since they were entered into the source CSV in the proper order:

source="test_sales.csv"
| makemv delim="," products
| makemv delim="," product_prices
| table customer_id,customer_fname,customer_lname,products,product_prices

Here is where my issue is: is there a way to tie the product for a purchase transaction in the multivalue "products" column to its corresponding price in the multivalue "product_prices" column? Everything seems to work except when I try to do something like listing the products by price for the multivalue fields, like this:

source="test_sales.csv"
| makemv delim="," products
| makemv delim="," product_prices
| stats count(products) by products,product_prices

In those results I'm getting combinations that are not exactly what I would want. For example,
it shows:
- 3 instances of product_100 at a price of 100 (should only be 2 instances)
- 2 instances of product_100 at a price of 200 (should be 0 instances of this combination)
- 2 instances of product_200 at a price of 100 (should be 0 instances of this combination)
- 2 instances of product_200 at a price of 200 (should only be 1 instance)

I'm likely approaching this incorrectly or using the wrong tool for the task; any help to get me on the right track would be appreciated. Thanks
Hi all, I'm on Splunk version 9.0.2. After decommissioning one indexer in a multisite cluster, I can't get back to meeting my SF/RF. A rolling restart and a CM restart had no effect. I have 3 SF tasks pending with the same message as mentioned in this post. I'd like to be clean before decommissioning another indexer. Any help to restore my SF/RF will be appreciated. Thanks
Hi Team, I have two dashboards designed for specific sets of locations. My plan is to consolidate them into a single dashboard, using filters to distinguish between the different locations. For instance, locations aaa, bbb, ccc, and ddd pertain to the inContact application, while locations eee, fff, ggg, and hhh belong to the Genesys application. The location value is in the marketing area field: "marketing-area": "aaa". I need to enable a filter for both inContact and Genesys: clicking on inContact should display its relevant locations, and similarly, clicking on Genesys should show the corresponding locations. Please let me know if any further input is required.
Hi all, I am coming from Splunk on-prem, so this is a bit confusing to me. I have looked at architectures for Splunk Cloud and can't understand how data configs are done when using Splunk Cloud. For example, let's say you have a UF on a machine that forwards data to Splunk indexers (cloud), and you need to make a custom sourcetype for this specific piece of data. Where would you define the parsing rules for this if you don't manage the indexers? Furthermore, if the data can be onboarded with a TA, how would you install this TA onto the indexers to assist with onboarding (assuming no need for a HF)? Any help would be appreciated, thanks!
Wait a second. You did both on the same host? rpm and deb?
Hi @SplunkySplunk, those are three big concepts, and it looks like you have done some studying of the Splunk docs (if not, the links are below). Maybe you should state your requirements more clearly, so there will be better answers. Thanks.

https://docs.splunk.com/Documentation/Splunk/9.1.2/Knowledge/Usesummaryindexing
https://docs.splunk.com/Documentation/Splunk/9.1.2/Knowledge/Aboutdatamodels
https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Knowledge/Manageacceleratedsearchsummaries
Hello. I'm using Splunk Cloud and thinking about adding a summary index or a data model. I'm trying to understand the difference between the three options: summary index, report acceleration, and data model. Can someone please explain the main purpose of each? Is using a summary index the best way to avoid performance issues with heavy searches? How does a summary index work? Should I create a new index and run my dashboards on that index? Thanks
I have updated the universal forwarder with the RPM and deb packages, using the following commands: rpm -Uvh and dpkg -i
Sure. But how do you define "assets"? How do you differentiate between them? While you can use the general approach of combining two separate searches (by means of either append or multisearch) with an additional field to classify your results into one of the two sets, there might be more effective ways in specific cases.
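To illustrate that generic combining pattern (all index, sourcetype, and field names below are hypothetical placeholders, not anything from your environment):

```
index=network_inventory sourcetype=asset_export
| eval asset_class="network"
| append
    [ search index=server_cmdb sourcetype=asset_export
      | eval asset_class="server" ]
| stats count by asset_class
```

The eval in each branch tags which search a result came from, so after the append you can still tell the two sets apart. multisearch works the same way, but its subsearches must be purely streaming (no transforming commands inside the brackets).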