All Posts



You know there is a field alias feature in Splunk, too.  That is a more appropriate solution if you really do want to search by a different name.  An extra lookup is clunky and adds compute cost. Go to Settings -> Fields -> Field aliases.
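For reference, the same alias can also be defined directly in configuration instead of the UI — a sketch, assuming a sourcetype and field names that are placeholders here:

```ini
# props.conf on the search head (e.g. in an app's local/ directory)
# Aliases the extracted field "error_message" so it can also be
# searched as "errmsg" for events of this sourcetype.
[my_sourcetype]
FIELDALIAS-errmsg = error_message AS errmsg
```

Unlike a lookup, the alias is applied at search time with no extra lookup-table maintenance.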
The problem I am having is the raw data looks like this:

"[8/8/24 13:37:46:622 EDT] 00007e14 HOSTEDWIRES** I ************"

What I am trying to do is search the raw data to find the "W" and "E" variants:

"[8/8/24 13:37:46:622 EDT] 00007e14 HOSTEDWIRES** W ************"
or
"[8/8/24 13:37:46:622 EDT] 00007e14 HOSTEDWIRES** E ************"

A basic search I am using (sorry, I had to obfuscate some of the SPL):

index="index" host IN ("Server 1","Server 2","Backup Server 1","Backup Server 2") source=* sourcetype=###_was_systemout_log | ("W" or "E")

In WebSphere SystemOut logs, the warning or error indicator comes after the timestamp and application type.  So, when I search for just ("W" or "E") it pulls back everything that has "W" or "E" anywhere in the text.  How do I restrict the search to that position, after the application type and before the transaction raw data?  I don't get to play with Splunk that much, so this is beyond my skill level.  I am still learning.  Thanks again for the help.
Hi If this is your own custom app, then just update splunklib from git as you originally installed it. There are instructions on dev.splunk.com for how to use splunklib in your own apps. If it's made by someone else, then ask the owner to update it, or create your own version and update it as described in the previous item. r. Ismo
Hello @yuanliu  I tested and it worked fine for the sample. I accepted your suggestion as the solution. Thank you for your help.

1) Max 50k rows
When I tested with the real data, I found out that the subsearch CSV file is limited to 50k rows.  I need the CSV file as my baseline for a left join, so if the file has 100k rows, then the expected result after the left join is 100k rows (with additional columns from the index).
a) What do you suggest to fix this issue?  (Modifying limits.conf is not allowed.)
b) Will splitting the CSV work?

2) Join command
Do you think the join command can work in my case? I tested your solution using join on real data, but it always gave me an inner-join result instead of a left join, although I already specified join type=left. In the solution you provided, the index data is treated as the left side because it is specified first. How do I make the CSV the left side?

I appreciate your help. Thanks
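For the last question, one common pattern is to start the search from the lookup itself, so the CSV drives the join. A sketch — the lookup name baseline.csv and the key field id are placeholders, not from the thread:

```spl
| inputlookup baseline.csv
| join type=left id
    [ search index=my_index
      | stats latest(status) as status by id ]
```

Because inputlookup is the generating command, every CSV row is kept and the subsearch only adds columns. Note that join subsearches carry their own row limits, so this does not by itself escape the 50k ceiling being discussed.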
I am trying to create a dashboard using a search that returns a 6-digit number, but I need a decimal point before the last 2 digits.  This is the result I get:

index=net Model=ERT-SCM EM_ID=Redacted
| stats count by Consumption

199486

I would like it shown like this: 1994.86 Kwh

I have tried this, but it only gives me the last 2 numbers with a decimal:

| rex mode=sed field=Consumption "s/(\\d{4})/./g"
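If the scaling is fixed (the last two digits are always the fraction), arithmetic is simpler than a sed-style rewrite. A sketch built on the search above:

```spl
index=net Model=ERT-SCM EM_ID=Redacted
| stats count by Consumption
``` Divide by 100 and keep two decimals, then append the unit ```
| eval Consumption = round(Consumption/100, 2) . " Kwh"
```

With the sample value, 199486 / 100 rounded to two decimals gives 1994.86, displayed as "1994.86 Kwh". Note that concatenating the unit turns the field into a string, so do any numeric sorting or charting before the final eval.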
Login to any of those servers and use

splunk btool alert_actions list --debug

That way you can see which file each setting is coming from. I'm not sure, but there could be some settings in this config which work only from …/system/local, or at least that was the case on older versions (6.x and 7.x)? r. Ismo
Hi @., You need Controller version 24.6 or higher. You are on an older Controller version. Can you try upgrading to 24.6?
The most important thing is to determine which index (not index*er*) holds the WebSphere logs.  That will narrow the scope of your search. Once you have that information, you can begin your search.  Start with " W " and " E ".  Those aren't great strings for searching, but they're a start.  As you receive results, use what you find to add to the search string until you have what you want.

index=websphere (" W " OR " E ")
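To anchor the match to the position after the application type, rather than anywhere in the message text, a rex-based sketch could look like this (the index name and the assumption that the indicator is always the third whitespace-separated token after the bracketed timestamp are both placeholders to adapt):

```spl
index=websphere (" W " OR " E ")
``` Extract the single-letter level that follows [timestamp] thread-id app-name ```
| rex "^\[[^\]]+\]\s+\S+\s+\S+\s+(?<level>[WE])\s"
| search level=*
| stats count by level
```

Against the sample event "[8/8/24 13:37:46:622 EDT] 00007e14 HOSTEDWIRES** W ************", the regex skips the bracketed timestamp, the thread id, and the application name, and captures the W or E that follows, so other Ws and Es in the payload no longer match.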
I tried to use the "customized in source" option in Splunk Cloud (9.1.2312.203) Dashboard Studio to create a Single Value whose background color is controlled by a search result. However, the code does not work.  The same configuration works well when tested with the static option. Below is the dashboard JSON:

{
    "visualizations": {
        "viz_74mllhEE": {
            "type": "splunk.singlevalue",
            "options": {
                "majorValue": "> sparklineValues | lastPoint()",
                "trendValue": "> sparklineValues | delta(-2)",
                "sparklineValues": "> primary | seriesByName('background_color')",
                "sparklineDisplay": "off",
                "trendDisplay": "off",
                "majorColor": "#0877a6",
                "backgroundColor": "> primary | seriesByName('background_color')"
            },
            "dataSources": {
                "primary": "ds_00saKHxb"
            }
        }
    },
    "dataSources": {
        "ds_00saKHxb": {
            "type": "ds.search",
            "options": {
                "query": "| makeresults \n| eval background_color=\"#53a051\"\n"
            },
            "name": "Search_1"
        }
    },
    "defaults": {
        "dataSources": {
            "ds.search": {
                "options": {
                    "queryParameters": {
                        "latest": "$global_time.latest$",
                        "earliest": "$global_time.earliest$"
                    }
                }
            }
        }
    },
    "inputs": {
        "input_global_trp": {
            "type": "input.timerange",
            "options": {
                "token": "global_time",
                "defaultValue": "-24h@h,now"
            },
            "title": "Global Time Range"
        }
    },
    "layout": {
        "type": "absolute",
        "options": {
            "width": 1440,
            "height": 960,
            "display": "auto"
        },
        "structure": [
            {
                "item": "viz_74mllhEE",
                "type": "block",
                "position": { "x": 0, "y": 0, "w": 250, "h": 250 }
            }
        ],
        "globalInputs": [ "input_global_trp" ]
    },
    "description": "",
    "title": "ztli_test"
}
There are two separate challenges: one is transforming the presentation, the other is getting the header into the desired order.  Here is my crack at it.  To begin, you need to extract TransID and the marker "Start time" or "End time".  How you do it is up to you because the data illustrated doesn't seem to be the raw format, at least not the timestamp.  I will take the illustrated format literally.

| rex "(?<time>\S+) (?<TransID>\S+) \"(?<marker>[^\"]+)"
| streamstats count by marker
| eval marker = marker . count
| xyseries TransID marker time
| transpose 0 header_field=TransID
| eval order = if(column LIKE "Start%", 1, 2)
| eval sequence = replace(column, ".+Time", "")
| sort sequence order
| fields - sequence order
| transpose 0 header_field=column column_name=TransID

So, the bigger challenge is to get the desired order of headers; I have to spend two transposes.  If you do not need that strict order, things are much simpler.  Output from your mock data is:

TransID  Start Time1  End Time1  Start Time2  End Time2  Start Time3  End Time3
0123     8:00         8:01       8:30         8:31       9:00         9:01

Here is an emulation for you to play with and compare with real data:

| makeresults format=csv data="_raw
8:00 0123 \"Start Time\"
8:01 0123 \"End Time\"
8:30 0123 \"Start Time\"
8:31 0123 \"End Time\"
9:00 0123 \"Start Time\"
9:01 0123 \"End Time\""
``` the above emulates index=<app> "Start Time" OR "End Time" ```
Thank you @isoutamo for your reply. I will look into the tool.  
Hi have you tried this https://splunkbase.splunk.com/app/3757 ? Of course, if the issue is that Azure has these internal delays, there is nothing that can be fixed by integrations. If this is the issue, then you should contact Azure support and ask them if there are any workarounds for it. r. Ismo
Here's what was in my props.conf. I cannot share logs.

[SUMS]
EVENT_BREAKER_ENABLE=true
EVENT_BREAKER=(At\s[0-2][0-9]:[0-6][0-9]:[0-6][0-9]\s-\d{4}\s-)
Hi You could try this https://github.com/ryanadler/downloadSplunk to generate suitable download link. At least it knows 7.0.x versions. r. Ismo
We use Splunk, and I do know that our SystemOut logs are forwarded to the Splunk indexer. Does anyone have some example SPLs for searching indexes for WebSphere SystemOut Warnings "W" and SystemOut Errors "E"? Thanks.   For your reference, here is a link to IBM's WebSphere log interpretation: ibm.com/docs/en/was/8.5.5?topic=SSEQTP_8.5.5/…
Hi When you look at the source field, it says that this event comes from the OS auditd, not from Splunk's internal logs. That is why it is in your Linux index. r. Ismo
And if you don't have backups (you really should), just add the admin role to your admin user in the authorize.conf file with any text editor. See the authorize.conf spec for how it should be done.
I created a support ticket, and they confirmed that this is a bug that will be fixed in the next release of SSE. However, they could not provide a date for the update and recommended that I downgrade back to 3.7.1. I did so and that worked. I've asked that they update the "Known Issues" list with this bug info.
Try something like this

index=win sourcetype="wineventlog" EventCode=4624 OR EventCode=4634
| bin span=1d _time as day
| stats count min(eval(if(EventCode=4624,_time,null()))) as first_logon max(eval(if(EventCode=4634,_time,null()))) as last_logout by day user
Hi I'm not sure if there are any official calculations available. You could do some estimates based on knowledge of your replication factor and the amount of ingestion and searches. Splunk will replicate all buckets between sites as soon as events are written to the primary bucket. Actually, Splunk acknowledges to the sending HF/UF (if you have indexAck configured) only after replication is done, not before. Searches also use bandwidth depending on your queries: bad queries use more bandwidth, better ones less. r. Ismo