All Posts

Unfortunately, there is nothing we as community users can do about this. From time to time it can take even a day or two for this email to arrive.
I registered for the 14-day Free Trial of Splunk Cloud Platform. I registered my email address and verified it. I expected to receive an email entitled "Welcome to Splunk Cloud Platform" with corresponding links from which to access and use a trial version of Splunk Cloud. That email never arrived after several hours. No evidence of it exists either in my "Spam" or "Trash" folders of my inbox. Please look into this and advise. Thanks! -Rolland
Hello, I have a report scheduled every week, and the results are exported to PDFs. Is there an option to NOT email if no results are found? Sometimes these PDFs have nothing in them. Thanks
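For reference, a scheduled report on its own emails unconditionally; the usual workaround appears to be saving the search as an alert that triggers only when there are results, with the email (and attached PDF) as the alert action. A minimal savedsearches.conf sketch, with a hypothetical stanza name, schedule, and recipient:

[Weekly PDF Report]
enableSched = 1
cron_schedule = 0 8 * * 1
# Trigger only when the search returns at least one result
counttype = number of events
relation = greater than
quantity = 0
# Email action with the results attached as a PDF
action.email = 1
action.email.to = you@example.com
action.email.sendpdf = 1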
I have the Splunk Add-on for Google Cloud Platform set up on an IDM server. I am currently on version 4.4 and have inputs set up already from months ago; however, I am now trying to send more data to Splunk, but for some reason the inputs page does not load anymore. The connection seems to be fine, as I am still receiving the expected data from my previous inputs, but when I try to add another input I get the following error:

Failed to Load Inputs Page
This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page.
Details: AxiosError: Request failed with status code 500
I would use list() instead of values() to prevent removal of duplicates, and wrap the product in exact() to prevent rounding errors:

| makeresults format=csv data="value_a
0.44
0.25
0.67
0.44"
| stats list(value_a) as value_a
| eval "product(value_a)"=1
| foreach value_a mode=multivalue
    [ eval "product(value_a)"=exact('product(value_a)' * <<ITEM>>) ]
| table "product(value_a)"

=>

product(value_a)
0.032428
Hi @madhav_dholakia,

In Simple XML, you can generate categorical choropleth maps with color-coded city boundaries. In Dashboard Studio, however, choropleth maps are limited to numerical distributions, e.g. OpenIssues by city, and the geospatial lookup geometry isn't always interpreted correctly.

In both examples, I've used mapping data published by the Office for National Statistics at https://geoportal.statistics.gov.uk. Search for BDY_TCITY DEC_2015 to download the corresponding KML file.

As a compromise, you can use a marker map to display color-coded markers at city centers by latitude and longitude. Here's the source:

{
  "visualizations": {
    "viz_KxsdmDQb": {
      "type": "splunk.map",
      "options": {
        "center": [52.560559999999924, -1.4702799999984109],
        "zoom": 6,
        "layers": [
          {
            "type": "marker",
            "dataColors": "> primary | seriesByName('Status') | matchValue(colorMatchConfig)",
            "choroplethOpacity": 0.75,
            "additionalTooltipFields": ["Status", "StoreID", "City", "OpenIssues"],
            "latitude": "> primary | seriesByName('lat')",
            "longitude": "> primary | seriesByName('lon')"
          }
        ]
      },
      "context": {
        "colorMatchConfig": [
          { "match": "Dormant/Green", "value": "#118832" },
          { "match": "Warning/Amber", "value": "#cba700" },
          { "match": "Critical/Red", "value": "#d41f1f" }
        ]
      },
      "dataSources": {
        "primary": "ds_CtvaIPJ3"
      }
    }
  },
  "dataSources": {
    "ds_CtvaIPJ3": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults format=csv data=\"StoreID,City,OpenIssues,Status,lat,lon\r\nStore 1,London,3,Critical/Red,51.507222,-0.1275\r\nStore 2,York,2,Warning/Amber,53.96,-1.08\r\nStore 3,Bristol,0,Dormant/Green,51.453611,-2.5975\r\nStore 4,Liverpool,1,Warning/Amber,53.407222,-2.991667\" \r\n| table StoreID City OpenIssues Status lat lon",
        "queryParameters": {
          "earliest": "-24h@h",
          "latest": "now"
        }
      },
      "name": "Choropleth map search"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {}
        }
      }
    }
  },
  "inputs": {},
  "layout": {
    "type": "absolute",
    "options": {
      "width": 918,
      "height": 500,
      "display": "auto"
    },
    "structure": [
      {
        "item": "viz_KxsdmDQb",
        "type": "block",
        "position": { "x": 0, "y": 0, "w": 918, "h": 500 }
      }
    ],
    "globalInputs": []
  },
  "description": "",
  "title": "eaw_store_status_ds"
}

As a static workaround, the Choropleth SVG visualization allows you to upload a custom image, e.g. a stylized map of England and Wales, and define custom SVG boundaries and categorical colors. The Dashboard Studio documentation includes a basic tutorial at https://docs.splunk.com/Documentation/Splunk/latest/DashStudio/mapsChorSVG.
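For completeness, a Simple XML categorical choropleth of this kind is typically driven by a search that attaches city geometry with the geom command. A minimal sketch, assuming the ONS KML file has been installed as a geospatial lookup; the lookup name tcity_geo is hypothetical:

| makeresults format=csv data="City,Status
London,Critical/Red
York,Warning/Amber
Bristol,Dormant/Green
Liverpool,Warning/Amber"
| geom tcity_geo featureIdField=City

The geom command adds a geometry field per row, which the choropleth visualization uses to draw each city boundary, colored here by the categorical Status field.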
N/A
So the UF recognizes the time change when it writes this message into the log, but its scheduler didn't handle it correctly to run the next round at the correct time. Definitely time to create a Splunk support case.
N.A
Thanks, I found this solution insightful and helpful for a similar scenario I am working on.
Hi All, are there any steps to follow to ingest transactional data from a TIBCO database into Splunk without any add-ons?
How quickly does that software correct the node's time after hibernation? Basically, after that the UF's cron schedule should work as expected. If not, then I propose that you create a support case with Splunk.
@isoutamo, No, I have not tried spath. Could you please guide me with that? I tried the below; it shows the events, but I am not getting the transaction-level information:

index="<indexname>" source="user1" OR source="user2" "<ProcessName>" "Exception occurred"
| spath
| table _time JobId TransactionId _raw
| search JobId=*
| append
    [ search index="<indexname>" source="user1" OR source="user2"
      | spath
      | search JobId=*
      | table _time JobId TransactionId _raw ]
| stats dc(TransactionId) as UniqueTransactionCount values(TransactionId) as UniqueTransactions by JobId
This is from the documentation:

Best practices for creating chain searches
Use these best practices to make sure that chain searches work as expected.

Use a transforming base search
A base search should be a transforming search that returns results formatted as a statistics table. For example, searches using the following commands are transforming searches: stats, chart, timechart, and geostats, among others. For more information on transforming commands, see About transforming commands in the Search Manual.

https://docs.splunk.com/Documentation/SplunkCloud/latest/DashStudio/dsChain#Best_practices_for_creating_chain_searches
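In Dashboard Studio source, a chain search is a ds.chain data source that extends a transforming base search; a minimal sketch, with hypothetical data source names and queries:

"dataSources": {
    "ds_base": {
        "type": "ds.search",
        "options": {
            "query": "index=_internal | stats count by sourcetype"
        }
    },
    "ds_top5": {
        "type": "ds.chain",
        "options": {
            "extend": "ds_base",
            "query": "| sort - count | head 5"
        }
    }
}

Here ds_base is the transforming base search (it ends in stats), and ds_top5 only post-processes the statistics table it returns.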
N.A  
Have you tried the spath command, since you have JSON data in use?
@bowesmana, I have tried that, but I am not getting any results. Actually, I am trying to match the jobId from the below message, and then using this jobId I have to get the other records which all match this jobId.
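If the goal is to take the JobId values from the error events and then pull every record sharing those JobIds, one alternative to append is a subsearch that feeds the JobIds back into the outer search as raw terms; a sketch, reusing the placeholders from the post above:

index="<indexname>" source="user1" OR source="user2"
    [ search index="<indexname>" source="user1" OR source="user2" "<ProcessName>" "Exception occurred"
      | spath
      | search JobId=*
      | fields JobId
      ``` return the JobId values as bare search terms ```
      | rename JobId as search
      | format ]
| spath
| stats dc(TransactionId) as UniqueTransactionCount values(TransactionId) as UniqueTransactions by JobId

Note that subsearches are capped (by default at 10,000 results), so this only works if the number of matching JobIds stays small.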
Here is one old post on how to write "SQL joins" in Splunk SPL: https://community.splunk.com/t5/Splunk-Search/What-is-the-relation-between-the-Splunk-inner-left-join-and-the/m-p/391288/thread-id/113948
Those seem to be the same as on-prem: 500,000 events and 30s (I think this was 60s earlier, but it seems to be the same on-prem too). See https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/Savedsearches#Use_a_transforming_base_search Based on those, you have exceeded both limits. I suppose the event limit is much more important, and this could be the reason why it didn't work as expected.
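The usual fix is to make the base search itself transforming, so it returns a small statistics table instead of raw events; a sketch of the pattern, with a hypothetical index and fields:

Base search (transforming, stays well under the event limit):

index=web sourcetype=access_combined
| stats count by host status

Chain search (post-processes the statistics table only):

| where status>=500
| stats sum(count) as error_count by host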
Thank you for your response. I'll take an example to explain it better. Let's say we have two event entries in Splunk as below:

1. ****1111222*abcabcabac*ERROR*<time> Logging server exception... Error code: 6039 Error description: An internal message proxy HTTP error occurred when the request was being processed Parameter: 504 Gateway Time-out

2. ****1111222*xyzxyz*0078*ERROR*<time> ExecuteFactoryJob: Caught soap exception. Java factory ID: 3910059732_3_0_223344 Request failed after tries = 1

These are two different event entries in Splunk which can be fetched by query1 and query2 separately. First I check for the 504 Gateway Time-out with error code 6039 and take the thread_id (1111222 in the above example) using rex; then, using this thread_id, I look for the second event entry shown above. If it is found, a single event should be returned as the result, containing both events in the raw message. It's like an inner join: if either event is not present, nothing should be returned. I tried using join as well, but it didn't work. I tried your query; it is giving results, some of which contain a single event and some of which contain both events grouped (which is expected). Each result event should contain the raw message of both examples. For the above example, the result should be one single event like below:

****1111222*abcabcabac*ERROR*<time> Logging server exception... Error code: 6039 Error description: An internal message proxy HTTP error occurred when the request was being processed Parameter: 504 Gateway Time-out
****1111222*xyzxyz*0078*ERROR*<time> ExecuteFactoryJob: Caught soap exception. Java factory ID: 3910059732_3_0_223344 Request failed after tries = 1

@isoutamo @ITWhisperer
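Since both events share the leading thread id, one way to get the inner-join behavior without join is to fetch both event types in a single search, extract the thread id, and keep only the ids that have both kinds; a sketch, with a hypothetical index name (the rex assumes the id is the first digit run between the leading asterisks, as in the examples above):

index=app_logs ("Error code: 6039" OR "ExecuteFactoryJob: Caught soap exception")
``` extract the shared thread id from the *<digits>* prefix ```
| rex field=_raw "^\*+(?<thread_id>\d+)\*"
``` classify each event so we can require one of each kind per thread ```
| eval kind=if(match(_raw, "Error code: 6039"), "gateway_timeout", "soap_exception")
| stats values(kind) as kinds list(_raw) as raw_messages by thread_id
``` inner-join behavior: keep only thread ids that have both event types ```
| where mvcount(kinds)=2
| table thread_id raw_messages

raw_messages is then a multivalue field holding both raw events for each thread id; if you need them concatenated into one string, mvjoin can do that.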