All Posts


The above response assumes Time is an absolute offset from Trigger Time and not the interval between samples. You can also extract the date and time from the source file name using a custom datetime.xml configuration, but INGEST_EVAL is easier to maintain.
Hi @kyokei,

The "Trigger Time" line will be lost to subsequent events after it's either discarded as a header or broken into an event. If you have the ability to manipulate the source file name, you can add the fractional seconds value to the file name and reference the source when extracting timestamps:

AUTO_231126_012051_500_0329.CSV

With that change made, you can, for example, combine INDEXED_EXTRACTIONS with TRANSFORMS and INGEST_EVAL to extract CSV fields and set _time for each event:

# inputs.conf
[monitor:///path/to/AUTO_*.CSV]
index = main
sourcetype = sensor_csv

# props.conf
[sensor_csv]
# disable default timestamp extraction and suppress errors
DATETIME_CONFIG = CURRENT
# enable indexed extractions for CSV files
INDEXED_EXTRACTIONS = CSV
# use header line 12 for field names:
#   "Time","U1-2[]","Event"
# these will be "cleaned" by Splunk:
#   Time
#   U1_2
#   Event
HEADER_FIELD_LINE_NUMBER = 12
# execute a transform to extract the _time value
TRANSFORMS-sensor_csv_time = sensor_csv_time

# transforms.conf
[sensor_csv_time]
INGEST_EVAL = _time:=strptime(replace(source, ".*(AUTO_\\d{6}_\\d{6}_\\d{3}).*", "\\1"), "AUTO_%y%m%d_%H%M%S_%N")+tonumber(coalesce(replace(_raw, "^(?!\")([^,]+),.*", "\\1"), 0))

``` search ```
index=main sourcetype=sensor_csv
| table _time source Time U1_2 Event

_time                    source                           Time              U1_2          Event
2023-11-26 01:20:52.500  AUTO_231126_012051_500_0329.CSV  +1.000000000E+00  +3.16000E+00  0
2023-11-26 01:20:52.400  AUTO_231126_012051_500_0329.CSV  +9.000000000E-01  +3.16500E+00  0
2023-11-26 01:20:52.300  AUTO_231126_012051_500_0329.CSV  +8.000000000E-01  +3.19400E+00  0
2023-11-26 01:20:52.200  AUTO_231126_012051_500_0329.CSV  +7.000000000E-01  +3.18400E+00  0
2023-11-26 01:20:52.100  AUTO_231126_012051_500_0329.CSV  +6.000000000E-01  +3.17300E+00  0
2023-11-26 01:20:52.000  AUTO_231126_012051_500_0329.CSV  +5.000000000E-01  +3.17300E+00  0
2023-11-26 01:20:51.900  AUTO_231126_012051_500_0329.CSV  +4.000000000E-01  +3.19100E+00  0
2023-11-26 01:20:51.800  AUTO_231126_012051_500_0329.CSV  +3.000000000E-01  +3.60100E+00  0
2023-11-26 01:20:51.700  AUTO_231126_012051_500_0329.CSV  +2.000000000E-01  +7.93600E+00  0
2023-11-26 01:20:51.600  AUTO_231126_012051_500_0329.CSV  +1.000000000E-01  +1.45180E+01  0
2023-11-26 01:20:51.500  AUTO_231126_012051_500_0329.CSV  +0.000000000E+00  +2.90500E+00  0
@PickleRick @ITWhisperer Thanks for the responses. I tried both and they both give me the same result, but still not exactly what I had in mind. Here's the result:

What I'm trying to do is more along the lines of getting a count of products that sold at specific price points, from transactions that may have multiple items purchased. Maybe what I'm trying to do isn't really possible or isn't the best approach to the problem, but the following table shows roughly what I'm trying to accomplish:

products     product_prices  count(products)
product_100  100             2
product_200  200             1
product_300  300             1
product_400  400             1
Yep. Check the output of splunk btool server list clustering | grep factor
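For reference, on the cluster manager that should return something like the lines below (the values are purely illustrative for a 2-site cluster, not taken from your environment):

splunk btool server list clustering | grep -i factor
replication_factor = 2
search_factor = 2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2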
Hi and thanks for the reply. "And what are your site RF/SF" > can you be more specific please? In the server.conf on my CM? (I will check that when I'm back at work tomorrow.) For the site details: 2 sites with 18 indexers in total, so 9 on one site and 8 + 1 decommissioned on the other site. I'll get back to you tomorrow morning. Regards,
Great. Could you please share what you have found? I would like to see it. Thanks
Hi @Hemant93,

Masking sensitive data is typically performed on the Heavy Forwarder / Indexer before it goes into the Splunk index. We can do that job with a props.conf file:

[maskpii]
SEDCMD-pii-dob = s/@dob=['"][^'"]+['"]/@dob='***MASKED***'/g
SEDCMD-pii-ssn = s/@ssn=['"][^'"]+['"]/@ssn='***MASKED***'/g

This file uses the maskpii sourcetype and tells Splunk to change any dob or ssn value to "***MASKED***". Put that props file on either the heavy forwarder or indexer (wherever your data is sent first) and restart Splunk. Using that file I ingested your sample data and here's the result:
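As a quick illustration (a made-up event, since I don't have your exact raw data in front of me), an event carrying those attributes would be indexed with the values replaced:

Before: user=jdoe @dob="1990-01-01" @ssn="123-45-6789" action=login
After:  user=jdoe @dob='***MASKED***' @ssn='***MASKED***' action=login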
And what are your site RF/SF settings and how many indexers do you have in each site?
Thanks. @bowesmana's solution seems to work, but it seems processing it externally is more efficient. Many thanks to all.
Thank you for your answer, @jenniandthebets

1. I did not want the playbook to run at all, but I see that there is no way out of it. If that is the case, why is there an option to choose tags when creating a playbook?
2. As for the datapath, I believe the filter should be [in] and not "==". Anyway, it did not always work well for me; I found that if I first create a simple code block that only takes "container:tags" as input and outputs it as a variable, only then does the filter work well. Am I the only one this happens to?
Hi all,

I'm on Splunk Version 9.0.2. After decommissioning one indexer in a multi-site cluster, I can't get back to meeting my SF / RF. A rolling restart and a CM restart (splunkd) had no effect. I have 3 SF tasks pending with the same message:

Missing enough suitable candidates to create a replicated copy in order to meet replication policy. Missing={ site2:1 }

I have tried Resync and rolling the bucket with no success. In the details of the pending task, I can see that the bucket is only on one indexer and is not searchable on the other indexers of the cluster. My SF = 2 and RF = 2. I'd like to be clean before decommissioning the next indexer.

Any advice or help to get my SF/RF met again would be highly appreciated (it is a production issue).

Thanks in advance
Multivalued fields are separate entities, which means Splunk doesn't keep any "connection" between values in those fields. For Splunk each field is just a single "multivalued value" (yes, I know it sounds bad ;-)). So you have to combine those values manually. @ITWhisperer already showed one solution, but for me it's a bit "brute force". My idea of a more "splunky" approach to splitting those products and product_prices would be to do

| eval zipped=mvzip(products,product_prices,":")
| mvexpand zipped
| eval zipped=split(zipped,":")
| eval products=mvindex(zipped,0), product_prices=mvindex(zipped,1)

Then you can do your stats.
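Putting it together with the searches from your original post, the end-to-end search would look something like this (a sketch, assuming the uploaded file is still searchable as source="test_sales.csv"):

source="test_sales.csv"
| makemv delim="," products
| makemv delim="," product_prices
| eval zipped=mvzip(products,product_prices,":")
| mvexpand zipped
| eval zipped=split(zipped,":")
| eval products=mvindex(zipped,0), product_prices=mvindex(zipped,1)
| stats count(products) by products,product_prices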
| eval row=mvrange(0,mvcount(products))
| mvexpand row
| eval products=mvindex(products, row)
| eval product_prices=mvindex(product_prices, row)
| stats count(products) by products,product_prices
Splunk Cloud is managed by clever automation on Splunk's side, so the apps you upload to Cloud land on the indexers as well. The proper way to define index-time props and transforms is therefore to make an app with those settings and install it on your Cloud instance.
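As a sketch (the app and sourcetype names here are placeholders, not anything from your environment), such an app can be as small as:

my_index_time_settings/
    default/
        app.conf
        props.conf          # e.g. [my:custom:sourcetype] with LINE_BREAKER, TIME_FORMAT, TRANSFORMS-...
        transforms.conf     # any transforms referenced from props.conf
    metadata/
        default.meta

Package it as a .tgz/.spl and install it on your Cloud stack (private app upload or the ACS API, depending on your subscription), and the same settings end up on both the search head and indexer tiers.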
This is a very old thread. You're unlikely to get an answer from people posting here. It's best if you create a new thread, possibly linking to this one for reference.
So, I've been away from Splunk for several years and am now revisiting it. I've got a scenario where I would like to track certain metrics from imported data. I created a simple CSV with just a few entries to demonstrate the issues I'm having. Below is the source data I created:

customer_id  Time        customer_fname  customer_lname  products                 product_prices
111          12/1/2023   John            Doe             product_100,product_200  100,200
222          12/11/2023  Suzy            Que             product_100              100
333          12/15/2023  Jack            Jones           product_300              300
111          12/18/2023  John            Doe             product_400              400

In this scenario these are just examples of customers, the items they purchased, and the price paid.

After uploading the file and displaying the data in a table, it looks as expected:

source="test_sales.csv"
| table customer_id,customer_fname,customer_lname,products,product_prices

Upon using makemv to convert "products" and "product_prices" to multi-value fields, again the results are as expected, and each product aligns with its price since they were input into the source CSV in the proper order:

source="test_sales.csv"
| makemv delim="," products
| makemv delim="," product_prices
| table customer_id,customer_fname,customer_lname,products,product_prices

Here is where my issue is: is there a way to tie the product for a purchase transaction in the multi-value "products" column to its corresponding price in the multi-value "product_prices" column? Everything seems to work except when I try to do something like listing the products by price for the multi-value fields like this:

source="test_sales.csv"
| makemv delim="," products
| makemv delim="," product_prices
| stats count(products) by products,product_prices

In the above results you can see that I'm getting results that are not exactly what I would want. For example, it shows:

3 instances of product_100 at a price of 100; should only be 2 instances
2 instances of product_100 at a price of 200; should be 0 instances of this combination
2 instances of product_200 at a price of 100; should be 0 instances of this combination
2 instances of product_200 at a price of 200; should only be 1 instance

I'm likely approaching this incorrectly or using the wrong tool for the task; any help to get me on the right track would be appreciated. Thanks
Hi all,

I'm on Splunk Version 9.0.2. After decommissioning one indexer in a multi-site cluster, I can't get back to meeting my SF / RF. A rolling restart and a CM restart have no effect. I have 3 SF tasks pending with the same message as mentioned in this post. I'd like to be clean before decommissioning another indexer. Any help to get my SF/RF met again would be appreciated. Thanks
Hi Team,

I have two dashboards designed for specific sets of locations. My plan is to consolidate them into a single dashboard, utilizing filters to distinguish between the different locations. For instance, locations aaa, bbb, ccc, and ddd pertain to the inContact application, while locations eee, fff, ggg, and hhh belong to the Genesys application. The location value is in the marketing area field, e.g. "marketing-area": "aaa".

I need to enable a filter for both inContact and Genesys. Clicking on inContact should display the relevant locations, and similarly, clicking on Genesys should show the corresponding locations. Please let me know if any further input is required.
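If these are classic (Simple XML) dashboards, a minimal sketch of the kind of filter described might look like the snippet below (the token name, index, and sourcetype are assumptions, not taken from the actual environment):

<input type="dropdown" token="locations">
  <label>Application</label>
  <choice value="aaa,bbb,ccc,ddd">inContact</choice>
  <choice value="eee,fff,ggg,hhh">Genesys</choice>
  <default>aaa,bbb,ccc,ddd</default>
</input>

Each panel search would then filter on the token, for example:

index=your_index sourcetype=your_sourcetype
| search marketing-area IN ($locations$)

Selecting inContact or Genesys in the dropdown would then restrict every panel to that application's locations.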
Hi all,

I am coming from Splunk on-prem, so this is a bit confusing to me. I have looked at architectures regarding Splunk Cloud and can't understand how data configs are done when using Splunk Cloud. For example, let's say:

- You have a UF on a machine that forwards data to Splunk Indexers (cloud), and you need to make a custom sourcetype for this specific piece of data. Where would you define the parsing rules for this if you don't manage the indexers? Furthermore, if the data can be onboarded with a TA, how would you install this TA onto the indexers to assist with onboarding (assuming no need for an HF)?

Any help would be appreciated, thanks!