All Posts

The most important thing is to determine which index (not which indexer) holds the WebSphere logs. That will narrow the scope of your search. Once you have that information, you can begin your search. Start with " W " and " E ". Those aren't great strings for searching, but they're a start. As you receive results, use what you find to add to the search string until you have what you want.

index=websphere (" W " OR " E ")
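If the single-letter markers return too much noise, one way to tighten the search is to extract the severity letter into its own field and report on it. This is only a sketch: the index name and the rex pattern assume the common SystemOut layout of [timestamp] threadId component severity message, so adjust both to match the real events.

index=websphere (" W " OR " E ")
| rex "^\[[^\]]+\]\s+\S+\s+(?<component>\S+)\s+(?<severity>[WE])\s"
| search severity=W OR severity=E
| stats count by component severity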
I tried to use the "customized in source" option in Splunk Cloud (9.1.2312.203) Dashboard Studio to create a Single Value whose background color is controlled by a search result. However, the code does not work. The same code below was tested with the static option, which works well. Below is the dashboard JSON.

{
  "visualizations": {
    "viz_74mllhEE": {
      "type": "splunk.singlevalue",
      "options": {
        "majorValue": "> sparklineValues | lastPoint()",
        "trendValue": "> sparklineValues | delta(-2)",
        "sparklineValues": "> primary | seriesByName('background_color')",
        "sparklineDisplay": "off",
        "trendDisplay": "off",
        "majorColor": "#0877a6",
        "backgroundColor": "> primary | seriesByName('background_color')"
      },
      "dataSources": {
        "primary": "ds_00saKHxb"
      }
    }
  },
  "dataSources": {
    "ds_00saKHxb": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults \n| eval background_color=\"#53a051\"\n"
      },
      "name": "Search_1"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "absolute",
    "options": {
      "width": 1440,
      "height": 960,
      "display": "auto"
    },
    "structure": [
      {
        "item": "viz_74mllhEE",
        "type": "block",
        "position": { "x": 0, "y": 0, "w": 250, "h": 250 }
      }
    ],
    "globalInputs": [ "input_global_trp" ]
  },
  "description": "",
  "title": "ztli_test"
}
There are two separate challenges: one is transforming the presentation, the other is getting the headers into the desired order. Here is my crack at it. To begin, you need to extract TransID and the marker "Start Time" or "End Time". How you do it is up to you, because the data illustrated doesn't seem to be the raw format, at least not the timestamp. I will take the illustrated format literally.

| rex "(?<time>\S+) (?<TransID>\S+) \"(?<marker>[^\"]+)"
| streamstats count by marker
| eval marker = marker . count
| xyseries TransID marker time
| transpose 0 header_field=TransID
| eval order = if(column LIKE "Start%", 1, 2)
| eval sequence = replace(column, ".+Time", "")
| sort sequence order
| fields - sequence order
| transpose 0 header_field=column column_name=TransID

So, the bigger challenge is to get the desired order of headers. I have to expend two transposes. If you do not need that strict order, things are much simpler. Output from your mock data is

TransID  Start Time1  End Time1  Start Time2  End Time2  Start Time3  End Time3
0123     8:00         8:01       8:30         8:31       9:00         9:01

Here is an emulation for you to play with and compare with real data

| makeresults format=csv data="_raw
8:00 0123 \"Start Time\"
8:01 0123 \"End Time\"
8:30 0123 \"Start Time\"
8:31 0123 \"End Time\"
9:00 0123 \"Start Time\"
9:01 0123 \"End Time\""
``` the above emulates index=<app> "Start Time" OR "End Time" ```
Thank you @isoutamo for your reply. I will look into the tool.  
Hi, have you tried this: https://splunkbase.splunk.com/app/3757 ? Of course, if the issue is that Azure has these internal delays, there is nothing that can be fixed by the integration. If that is the case, then you should contact Azure support and ask them whether there are any workarounds for it. r. Ismo
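To check whether the delay is really happening on the Azure side, you can also compare the event timestamp against the time Splunk indexed it. This is only a sketch, and the index and sourcetype are placeholders; use the ones your Azure add-on actually writes to.

index=azure sourcetype=azure* earliest=-4h
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag by sourcetype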
Here's what was in my props.conf. I cannot share logs.

[SUMS]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = (At\s[0-2][0-9]:[0-6][0-9]:[0-6][0-9]\s-\d{4}\s-)
Hi, you could try this https://github.com/ryanadler/downloadSplunk to generate a suitable download link. At least it knows the 7.0.x versions. r. Ismo
We use Splunk, and I do know that our SystemOut logs are forwarded to the Splunk indexer. Does anyone have some example SPLs for searching indexes for WebSphere SystemOut Warnings "W" and SystemOut Errors "E"? Thanks.   For your reference, here is a link to IBM's WebSphere log interpretation: ibm.com/docs/en/was/8.5.5?topic=SSEQTP_8.5.5/…
Hi, when you look at the source, it says that this event comes from the OS auditd, not from Splunk's internal logs. That is why it is in your Linux index. r. Ismo
And if you don't have backups (you really should), just add the admin role to your admin user in the authorize.conf file with any text editor. See the authorize.conf spec for how it should be done.
I created a support ticket, and they confirmed that this is a bug that will be fixed in the next release of SSE. However, they could not provide a date for the update and recommended that I downgrade back to 3.7.1. I did so and that worked. I've asked that they update the "Known Issues" list with this bug info.
Try something like this

index=win sourcetype="wineventlog" (EventCode=4624 OR EventCode=4634)
| bin span=1d _time as day
| stats count min(eval(if(EventCode=4624,_time,null()))) as first_logon max(eval(if(EventCode=4634,_time,null()))) as last_logout by day user
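The min/max values come back as epoch seconds, so if readable times are wanted, a display format can be appended to the search above; a small sketch only.

| fieldformat first_logon = strftime(first_logon, "%Y-%m-%d %H:%M:%S")
| fieldformat last_logout = strftime(last_logout, "%Y-%m-%d %H:%M:%S")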
Hi, I'm not sure if there are any official calculations available. You could do some estimates based on knowledge of your replication factor and the amount of ingestion and searching. Splunk will replicate all buckets between sites as soon as events are written to the primary bucket. Actually, Splunk informs the sending HF / UF (if you have indexAck configured) after replication is done, not before. Searches also use bandwidth depending on your queries: bad queries use more bandwidth, better ones less. r. Ismo
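For the ingestion part of that estimate, the daily volume can be read from the license usage logs (run this on the license manager). A sketch only; the 30-day window and the per-index split are arbitrary choices.

index=_internal source=*license_usage.log type=Usage earliest=-30d@d
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) as daily_GB by idx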
Hello, I am struggling to figure out how this request can be achieved. I need to report on events from an API call in Splunk; however, the API call requires variables from another API call. I have been testing with the Add-On Builder and can make the initial request. I'm seeing the resulting events in Splunk Search, but I can't figure out how to create a secondary API call that could use those fields as variables in the secondary args or parameters fields. I was trying to use the API module, because I'm not fluent at all with scripting. Thanks for any help on this, it is greatly appreciated, Tom
Hi, some TAs support some kind of HA, e.g. DB Connect, but I think most don't. With DB Connect you could use an SHC configuration for managing HA. I'm not sure how well this currently works in TAs in general; it needs some kind of mechanism for distributed checkpoint status, e.g. the KV store. r. Ismo
It's good to know that all those nodes manage their bucket copies independently. There can be situations where the primary bucket has already been removed, for example, while secondary copies of it still exist on another site and/or on other nodes in the primary site.
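If you want to see how copies of a given index are spread across the peers, the bucket inventory can be listed from a search head; a sketch, with main as a stand-in for one of your indexes.

| dbinspect index=main
| stats count by splunk_server, state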
In our current Splunk deployment we have 2 HFs: one is used for DB Connect, the other for the HEC connector and other inputs. The requirement is that if one HF goes down, the other HF can take over all of its functions. So, is there a high availability option available for the heavy forwarder or for the DB Connect app?
Usually those underscore indexes are restricted to admin access only. As @PickleRick said, they are reserved for Splunk's own usage, not for regular data. If you need to use them as a regular user, access must be granted to them separately.
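A quick way for a user to check which indexes they can actually search, including the underscore ones, is an eventcount over everything their roles allow; a sketch that simply lists the accessible index names.

| eventcount summarize=false index=* index=_*
| dedup index
| fields index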
What does your job inspection say about the time period of your search?
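Besides the Job Inspector, the effective time range can also be pulled into the results themselves; a small sketch to append to the search in question.

| addinfo
| eval search_earliest = strftime(info_min_time, "%Y-%m-%d %H:%M:%S"), search_latest = strftime(info_max_time, "%Y-%m-%d %H:%M:%S")
| table search_earliest search_latest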
Hi @Nraj87, Replication tasks will queue if remote indexers are unavailable, but it's generally assumed they are always on and reliably connected. Indexers in all sites remain active participants in the cluster, subject to your replication, search, and forwarding settings.