All Posts

Thanks! Albeit a bit slow and unresponsive, I got some results:

NB action event_id mx_status operation portfolio_entity portfolio_name sky_id trade_type tradebooking_sgp
0   0 LIVE sgp usa usaeod ABC Korea ABC Panema ... ... A USD AOU ... ... ... 12345678 VanillaSwap ... ... YYYY/MM/DD HH:MM:SS ...
Aside from the limits for base search results, using a base search to hold large numbers of results will often NOT improve performance. You are taking lots of results from perhaps multiple indexers, where you benefit from parallelism, and sticking them on the search head, where you have only the CPU of the single search head to process all those results - while also competing for CPU with the other users of that search head.

Note that the comment about doing this in the base search

... | stats count as Total, count(eval(httpStatusCde!="200" OR statusCde!="0000")) as failures, exactperc95(respTime) as p95RespTime by _time EId

followed by a post-process search doing

| search EId="5eb2aee9" | stats count as Total, count(failures) as failures, first(p95RespTime) as p95RespTime by _time

is not quite right: you don't need another stats, because you are just taking the information already calculated in the base stats and filtering out only the EId you want.

However, a point to note about stats + stats is that the second stats would not do stats count, but stats sum(Total), i.e. if you wanted to get the total for an EId without regard to _time, you could do something like this:

| search EId="5eb2aee9" | stats sum(Total) as Total, sum(failures) as failures, min(p95RespTime) as min_p95RespTime max(p95RespTime) as max_p95RespTime avg(p95RespTime) as avg_p95RespTime
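For completeness, this is roughly how a base/post-process pair is wired up in Simple XML - a minimal sketch assuming the stats above as the base search; the index name, time range, and EId value are placeholders to adapt:

<dashboard>
  <search id="base">
    <query>index=foo | stats count as Total, count(eval(httpStatusCde!="200" OR statusCde!="0000")) as failures, exactperc95(respTime) as p95RespTime by _time EId</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <table>
        <search base="base">
          <query>| search EId="5eb2aee9"</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>

The base search runs once; each panel that references it with base="base" only pays for the post-process step.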
Can you clarify what you did to get the "search time enrichment"? Did you create an automatic lookup, are you using a lookup to enrich the data in your search SPL, or are you doing something else?

If you change your lookup, then the lookup results will change, so I am not sure what you mean by "real time enrichment". The principle of a CSV lookup is to give you data from the lookup file based on a field or fields in an event. That principle would give you "search time" AND "real time" enrichment, as they would be one and the same thing.
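For reference, a minimal sketch of what an automatic lookup looks like in configuration - all names here (lookup file, sourcetype, fields) are placeholders to adapt:

transforms.conf:

[my_csv_enrichment]
filename = enrichment.csv

props.conf:

[your_sourcetype]
LOOKUP-enrich = my_csv_enrichment lookup_key_field AS event_field OUTPUT extra_field1 extra_field2

Because the lookup is applied at search time, replacing enrichment.csv every couple of days means the very next search picks up the new values - there is nothing to re-index.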
In your screenshot, the field jobId has a lower case j, whereas you're using JobId - field names are case sensitive. Also, when you use a simple spath to extract all fields, they will have the JSON hierarchy in their field names, i.e. the jobId is the field Properties.jobId, not jobId.

Also, this is all achievable without using append, so try using the subsearch to provide the constraints for the outer search.
If you run the search that gives you that output in Verbose mode, you will see the fields that are automatically extracted. If jobId is a field that is automatically extracted, then you should write a basic search that looks for all the jobIds you want - you tried to do that with your rex statement, but you actually included the text "jobId:..." in the dynamic_text; you want the jobId data without "jobId:". As @isoutamo says, if jobId is NOT auto-extracted, then use spath to get it and then do the stats on the jobId. For example, this is the SUBSEARCH - which, if you run it on its own, will return a single field called jobId with all the jobIds you want:

index="<indexname>" (source = "user1" OR source = "user2") "<ProcessName>" "Exception occurred"
| spath Properties.jobId ``` This uses spath to extract the jobId ```
| search Properties.jobId=*
| stats values(Properties.jobId) AS jobId

Then use this as the subsearch to the outer search and it will then find all records that have a jobId matching the ones you are selecting. Note that if your jobId is NOT auto-extracted, then you cannot make a search for jobId=X, so you will need to either configure Splunk to auto-extract the JSON or create a calculated field with this type of expression

| eval jobId=spath(_raw, "Properties.jobId")

which will mean jobId will always be a field in your data for search, so you won't have to use the spath expression in your search.
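If you go the configuration route, a minimal props.conf sketch - the sourcetype name is a placeholder, and either setting alone is sufficient:

[your_json_sourcetype]
# either auto-extract all JSON fields at search time...
KV_MODE = json
# ...or define just the one calculated field
EVAL-jobId = spath(_raw, "Properties.jobId")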
What do you get when you try something like this?

index=sky sourcetype=sky_trade_murex_timestamp OR sourcetype=mx_to_sky
``` Parse sky_trade_murex_timestamp events (note that trade_id is put directly into the NB field) ```
| rex field=_raw "trade_id=\"(?<NB>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
``` Parse mx_to_sky events ```
| rex field=_raw "(?<NB>\d+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
``` Reduce to just the fields of interest ```
| fields sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO
``` "Join" events by NB using stats ```
| stats values(*) as * by NB
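If you want to see what that final stats does before running it against real data, here is a tiny emulation in the spirit of the other emulations in this thread - the NB and field values are made up:

| makeresults format=csv data="NB,mx_status,sky_id
12345678,LIVE,555"
| append
    [| makeresults format=csv data="NB,TRN_STATUS,CURRENCY
12345678,DONE,USD"]
| stats values(*) as * by NB
``` returns one row for NB=12345678 with mx_status, sky_id, TRN_STATUS and CURRENCY all filled in ```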
Hi,

I have deployed Splunk Enterprise and my logs are getting ingested into the indexer. Now I have created an app for enriching the logs with additional fields from a CSV file. I have deployed the app by making configuration changes in props.conf and transforms.conf, and I am able to view search time enrichment. But my requirement is real time enrichment, as my CSV file would change every 2 days. Can anyone provide a sample configuration for props.conf and transforms.conf for real time enrichment of logs with fields from a CSV, based on a match with one of the fields of the logs?

Regards
Hello everyone,

I'm trying to send SPAN traffic from a single interface (ens35) to Splunk Enterprise using the Splunk Stream forwarder in independent mode. The Splunk Stream forwarder and the search head appear to be connected properly, but I'm not seeing any of the SPAN traffic in Splunk.

In the stmfwd.log, I see the following error:

(CaptureServer.cpp:2032) stream.CaptureServer - NetFlow receiver configuration is not set in streamfwd.conf. NetFlow data will not be captured. Please update streamfwd.conf to include correct NetFlow receiver configuration.

However, I'm not trying to capture NetFlow data; I only want to capture the raw SPAN traffic. Here is my streamfwd.conf:

[streamfwd]
httpEventCollectorToken = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
indexer.1.uri = http://splunk-indexer:8088
indexer.2.uri = http://splunk-indexer2:8088
streamfwdcapture.1.interface = ens35

Why is the SPAN traffic not being forwarded to Splunk? How can I configure Splunk Stream properly so that it captures and sends the SPAN traffic to my indexers without any NetFlow setup?

Thank you!
As @bowesmana says, map is generally not suitable for what you are trying to do. Instead of illustrating an imagined SPL snippet for volunteers to read your mind, it is better to ask yourself, and illustrate:

- What is a meaningful dataset to illustrate my problem? Action: illustrate said dataset using text. (Screenshots do not apply. Anonymize as needed.)
- What is the information I am trying to obtain? Action: illustrate your desired output based on the dataset.
- What is the logic between my sample dataset and desired output? Use plain language, not SPL. Make your intention clear in logical terms. Use common mathematical/logical symbols if you like, but not SPL if you have any doubt about your code.
- If you illustrate some SPL that does not give you the desired output, also illustrate the actual results from the sample dataset. Then explain why the result differs from the desired output, unless the reason is painfully obvious.

Before I try to read your mind, let me point out one critical point you need to clarify - I will use your "first search" to exemplify. Do you mean to search for events with terms "<ProcessName>" and "Exception occurred" only in source=user2, then all events from source=user1? Because that's what your first search does. Your second search has the same logic; therefore, IF that map command works, events in source=user1 will always match. Is this really your intention?

I have a high suspicion that you want to search for events with terms "<ProcessName>" and "Exception occurred" in either source=user1 or source=user2. Is this correct? I will assume so in the following.

This being said, based on the screenshot snippet you shared, you don't need to use regex or even spath to extract jobId, because Splunk has clearly done that for you. The field name is Properties.jobId. All you need to do is to match this field.
In other words, given these 8 simplified events:

#  source  _raw
1  user1   {"Level": "Error", "MessageTemplate": "Exception occurred - something something", "Properties": { "jobId": "8ef3e2f8-35c4-4f0a-8553-cffd718640b", "message": "<ProcessNotName2> Exception occurred - Exception Source: System.Activities stuff, stuff" } }
2  user1   {"Level": "Error", "MessageTemplate": "Exception occurred - something more", "Properties": { "jobId": "8ef3e2f8-2903-4f0a-8553-cffd718640b", "message": "<ProcessName> Exception occurred - Exception Source: System.Activities stuff, stuff" } }
3  user1   {"Level": "Info", "MessageTemplate": "Exception did not occurr - something else", "Properties": { "jobId": "8ef3e2f8-1234-4f0a-8572-cffd718640b", "message": "Exception won't happen - blah" } }
4  user1   {"Level": "Info", "MessageTemplate": "Not exception - something else", "Properties": { "jobId": "8ef3e2f8-5678-4f0a-8553-cffd718640b", "message": "Nothing to see here - don't worry" } }
5  user2   {"Level": "Error", "MessageTemplate": "Exception occurred - something more", "Properties": { "jobId": "8ef3e2f8-35c4-4f0a-8553-cffd718640b", "message": "Exception occurred - Exception Source: System.Activities stuff, stuff" } }
6  user2   {"Level": "Error", "MessageTemplate": "Exception occurred - something something", "Properties": { "jobId": "8ef3e2f8-2903-4f0a-8553-cffd718640b", "message": "Exception occurred - Exception Source: System.Activities stuff, stuff" } }
7  user2   {"Level": "Info", "MessageTemplate": "Exception did not occurr - something else", "Properties": { "jobId": "8ef3e2f8-2903-4f0a-8572-cffd718640b", "message": "Exception won't happen - blah" } }
8  user2   {"Level": "Info", "MessageTemplate": "Not exception - something else", "Properties": { "jobId": "8ef3e2f8-2903-4f0a-8553-cffd718640b", "message": "Nothing to see here - don't worry" } }

you want to select 2, 6, and 8. This is the search to use:

index="<indexname>" (source = "user1" OR source = "user2")
    [ search index="<indexname>" (source = "user1" OR source = "user2") "<ProcessName>" "Exception occurred"
    | stats values(Properties.jobId) AS Properties.jobId ]

This is the data emulation to generate the mock dataset posted above.
Play with it and compare with real data:

| makeresults
| eval data = mvappend(
    "{\"Level\": \"Error\", \"MessageTemplate\": \"Exception occurred - something something\", \"Properties\": { \"jobId\": \"8ef3e2f8-35c4-4f0a-8553-cffd718640b\", \"message\": \"<ProcessNotName2> Exception occurred - Exception Source: System.Activities stuff, stuff\" } }",
    "{\"Level\": \"Error\", \"MessageTemplate\": \"Exception occurred - something more\", \"Properties\": { \"jobId\": \"8ef3e2f8-2903-4f0a-8553-cffd718640b\", \"message\": \"<ProcessName> Exception occurred - Exception Source: System.Activities stuff, stuff\" } }",
    "{\"Level\": \"Info\", \"MessageTemplate\": \"Exception did not occurr - something else\", \"Properties\": { \"jobId\": \"8ef3e2f8-1234-4f0a-8572-cffd718640b\", \"message\": \"Exception won't happen - blah\" } }",
    "{\"Level\": \"Info\", \"MessageTemplate\": \"Not exception - something else\", \"Properties\": { \"jobId\": \"8ef3e2f8-5678-4f0a-8553-cffd718640b\", \"message\": \"Nothing to see here - don't worry\" } }" )
| mvexpand data
| rename data AS _raw
| spath
| eval source = "user1"
| append
    [| makeresults
    | eval data = mvappend(
        "{\"Level\": \"Error\", \"MessageTemplate\": \"Exception occurred - something more\", \"Properties\": { \"jobId\": \"8ef3e2f8-35c4-4f0a-8553-cffd718640b\", \"message\": \"Exception occurred - Exception Source: System.Activities stuff, stuff\" } }",
        "{\"Level\": \"Error\", \"MessageTemplate\": \"Exception occurred - something something\", \"Properties\": { \"jobId\": \"8ef3e2f8-2903-4f0a-8553-cffd718640b\", \"message\": \"Exception occurred - Exception Source: System.Activities stuff, stuff\" } }",
        "{\"Level\": \"Info\", \"MessageTemplate\": \"Exception did not occurr - something else\", \"Properties\": { \"jobId\": \"8ef3e2f8-2903-4f0a-8572-cffd718640b\", \"message\": \"Exception won't happen - blah\" } }",
        "{\"Level\": \"Info\", \"MessageTemplate\": \"Not exception - something else\", \"Properties\": { \"jobId\": \"8ef3e2f8-2903-4f0a-8553-cffd718640b\", \"message\": \"Nothing to see here - don't worry\" } }" )
    | mvexpand data
    | rename data AS _raw
    | spath
    | eval source = "user2"]
``` the above emulates index="<indexname>" (source = "user1" OR source = "user2") ```

Using this emulation in both the main search and the subsearch, here is a full emulation:

| makeresults
| eval data = mvappend(
    "{\"Level\": \"Error\", \"MessageTemplate\": \"Exception occurred - something something\", \"Properties\": { \"jobId\": \"8ef3e2f8-35c4-4f0a-8553-cffd718640b\", \"message\": \"<ProcessNotName2> Exception occurred - Exception Source: System.Activities stuff, stuff\" } }",
    "{\"Level\": \"Error\", \"MessageTemplate\": \"Exception occurred - something more\", \"Properties\": { \"jobId\": \"8ef3e2f8-2903-4f0a-8553-cffd718640b\", \"message\": \"<ProcessName> Exception occurred - Exception Source: System.Activities stuff, stuff\" } }",
    "{\"Level\": \"Info\", \"MessageTemplate\": \"Exception did not occurr - something else\", \"Properties\": { \"jobId\": \"8ef3e2f8-1234-4f0a-8572-cffd718640b\", \"message\": \"Exception won't happen - blah\" } }",
    "{\"Level\": \"Info\", \"MessageTemplate\": \"Not exception - something else\", \"Properties\": { \"jobId\": \"8ef3e2f8-5678-4f0a-8553-cffd718640b\", \"message\": \"Nothing to see here - don't worry\" } }" )
| mvexpand data
| rename data AS _raw
| spath
| eval source = "user1"
| append
    [| makeresults
    | eval data = mvappend(
        "{\"Level\": \"Error\", \"MessageTemplate\": \"Exception occurred - something more\", \"Properties\": { \"jobId\": \"8ef3e2f8-35c4-4f0a-8553-cffd718640b\", \"message\": \"Exception occurred - Exception Source: System.Activities stuff, stuff\" } }",
        "{\"Level\": \"Error\", \"MessageTemplate\": \"Exception occurred - something something\", \"Properties\": { \"jobId\": \"8ef3e2f8-2903-4f0a-8553-cffd718640b\", \"message\": \"Exception occurred - Exception Source: System.Activities stuff, stuff\" } }",
        "{\"Level\": \"Info\", \"MessageTemplate\": \"Exception did not occurr - something else\", \"Properties\": { \"jobId\": \"8ef3e2f8-2903-4f0a-8572-cffd718640b\", \"message\": \"Exception won't happen - blah\" } }",
        "{\"Level\": \"Info\", \"MessageTemplate\": \"Not exception - something else\", \"Properties\": { \"jobId\": \"8ef3e2f8-2903-4f0a-8553-cffd718640b\", \"message\": \"Nothing to see here - don't worry\" } }" )
    | mvexpand data
    | rename data AS _raw
    | spath
    | eval source = "user2"]
``` the above emulates index="<indexname>" (source = "user1" OR source = "user2") ```
| search
    [| makeresults
    | eval data = mvappend(
        "{\"Level\": \"Error\", \"MessageTemplate\": \"Exception occurred - something something\", \"Properties\": { \"jobId\": \"8ef3e2f8-35c4-4f0a-8553-cffd718640b\", \"message\": \"<ProcessNotName2> Exception occurred - Exception Source: System.Activities stuff, stuff\" } }",
        "{\"Level\": \"Error\", \"MessageTemplate\": \"Exception occurred - something more\", \"Properties\": { \"jobId\": \"8ef3e2f8-2903-4f0a-8553-cffd718640b\", \"message\": \"<ProcessName> Exception occurred - Exception Source: System.Activities stuff, stuff\" } }",
        "{\"Level\": \"Info\", \"MessageTemplate\": \"Exception did not occurr - something else\", \"Properties\": { \"jobId\": \"8ef3e2f8-1234-4f0a-8572-cffd718640b\", \"message\": \"Exception won't happen - blah\" } }",
        "{\"Level\": \"Info\", \"MessageTemplate\": \"Not exception - something else\", \"Properties\": { \"jobId\": \"8ef3e2f8-5678-4f0a-8553-cffd718640b\", \"message\": \"Nothing to see here - don't worry\" } }" )
    | mvexpand data
    | rename data AS _raw
    | spath
    | eval index = "<indexname>", source = "user1"
    | append
        [| makeresults
        | eval data = mvappend(
            "{\"Level\": \"Error\", \"MessageTemplate\": \"Exception occurred - something more\", \"Properties\": { \"jobId\": \"8ef3e2f8-35c4-4f0a-8553-cffd718640b\", \"message\": \"Exception occurred - Exception Source: System.Activities stuff, stuff\" } }",
            "{\"Level\": \"Error\", \"MessageTemplate\": \"Exception occurred - something something\", \"Properties\": { \"jobId\": \"8ef3e2f8-2903-4f0a-8553-cffd718640b\", \"message\": \"Exception occurred - Exception Source: System.Activities stuff, stuff\" } }",
            "{\"Level\": \"Info\", \"MessageTemplate\": \"Exception did not occurr - something else\", \"Properties\": { \"jobId\": \"8ef3e2f8-2903-4f0a-8572-cffd718640b\", \"message\": \"Exception won't happen - blah\" } }",
            "{\"Level\": \"Info\", \"MessageTemplate\": \"Not exception - something else\", \"Properties\": { \"jobId\": \"8ef3e2f8-2903-4f0a-8553-cffd718640b\", \"message\": \"Nothing to see here - don't worry\" } }" )
        | mvexpand data
        | rename data AS _raw
        | spath
        | eval source = "user2"]
    | search "<ProcessName>" "Exception occurred"
    ``` the above emulates index="<indexname>" (source = "user1" OR source = "user2") "<ProcessName>" "Exception occurred" ```
    | stats values(Properties.jobId) as Properties.jobId ]

The output is these three events:

source  _raw
user1   {"Level": "Error", "MessageTemplate": "Exception occurred - something more", "Properties": { "jobId": "8ef3e2f8-2903-4f0a-8553-cffd718640b", "message": "<ProcessName> Exception occurred - Exception Source: System.Activities stuff, stuff" } }
user2   {"Level": "Error", "MessageTemplate": "Exception occurred - something something", "Properties": { "jobId": "8ef3e2f8-2903-4f0a-8553-cffd718640b", "message": "Exception occurred - Exception Source: System.Activities stuff, stuff" } }
user2   {"Level": "Info", "MessageTemplate": "Not exception - something else", "Properties": { "jobId": "8ef3e2f8-2903-4f0a-8553-cffd718640b", "message": "Nothing to see here - don't worry" } }
Thanks. As per my first post, I'd like to join 2 searches together on NB and retain all columns. I am able to retain all columns, but some rows are filled and some are not (even though NB is definitely matching in both searches).

index=sky sourcetype=sky_trade_murex_timestamp
| rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
| rename trade_id as NB
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type

index=sky sourcetype=mx_to_sky
| rex field=_raw "(?<NB>\d+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
| eval NB = tostring(trim(NB))
| table TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO
There is no such option because reports are sent unconditionally. If you wish to send email only when there are results then consider changing the report to an alert.
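If the report is defined in savedsearches.conf, the change amounts to adding a trigger condition along these lines - a sketch, assuming email is the only action and the stanza name matches your report:

[My weekly report]
action.email = 1
action.email.sendpdf = 1
# only trigger (and therefore email) when the search returns results
counttype = number of events
relation = greater than
quantity = 0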
Hi @Jado95,

Is your question specific to the Splunk Add-on for Cisco ASA or to Cisco ASA itself? The message format is defined by Cisco ASA, and the add-on implementation should agree with the Cisco ASA documentation at https://www.cisco.com/c/en/us/td/docs/security/asa/syslog/b_syslog/syslog-messages-302003-to-342008.html:

302013: ... If inbound is specified, the original control connection was initiated from the outside. For example, for FTP, all data transfer channels are inbound if the original control channel is inbound. If outbound is specified, the original control connection was initiated from the inside. ...

302015: ... If inbound is specified, then the original control connection is initiated from the outside. For example, for UDP, all data transfer channels are inbound if the original control channel is inbound. If outbound is specified, then the original control connection is initiated from the inside.

The corresponding teardown events, 302014 and 302016, do not specify a direction, so without prior knowledge, the field extraction can't know which address is the initiator. If needed, you can correlate the events by the session_id field. This example is slow and ugly; it's only meant to demonstrate the correlation:

| eventstats values(direction) as direction by session_id
| eval src_ip_tmp=src_ip, dest_ip_tmp=dest_ip,
    src_ip=if(lower(vendor_action)=="teardown" && lower(direction)=="outbound", dest_ip_tmp, src_ip_tmp),
    dest_ip=if(lower(vendor_action)=="teardown" && lower(direction)=="outbound", src_ip_tmp, dest_ip_tmp)
| fields - src_ip_tmp dest_ip_tmp
Unfortunately, we as community users cannot do anything about this. From time to time it can take even a day or two to get this email.
I registered for the 14-day Free Trial of Splunk Cloud Platform. I registered my email address and verified it. I expected to receive an email entitled "Welcome to Splunk Cloud Platform" with corresponding links from which to access and use a trial version of Splunk Cloud. That email never arrived, even after several hours, and no trace of it exists in either the "Spam" or "Trash" folders of my inbox. Please look into this and advise. Thanks! -Rolland
Hello, I have a report scheduled every week and the results are exported to PDFs. Is there an option to NOT email if no results are found? Sometimes these PDFs have nothing in them.

Thanks
I have the Splunk Add-on for Google Cloud Platform set up on an IDM server. I am currently on version 4.4 and have inputs set up already from months ago. However, I am trying to send more data to Splunk, and for some reason the inputs page does not load anymore. The connection seems to be fine, as I am still receiving expected data from my previous inputs, but now when I try to add another input I get the following error:

Failed to Load Inputs Page
This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page.
Details: AxiosError: Request failed with status code 500
I would use list() instead of values() to prevent removal of duplicates, and wrap the product in exact() to prevent rounding errors:

| makeresults format=csv data="value_a
0.44
0.25
0.67
0.44"
| stats list(value_a) as value_a
| eval "product(value_a)"=1
| foreach value_a mode=multivalue
    [ eval "product(value_a)"=exact('product(value_a)' * <<ITEM>>) ]
| table "product(value_a)"

=>

product(value_a)
0.032428
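If the foreach feels heavy, the same product can also be computed by summing natural logs - a sketch against the same mock data; note this gives up exact() arithmetic, so expect ordinary floating-point rounding, and it only works for strictly positive values (ln of zero or a negative is null):

| makeresults format=csv data="value_a
0.44
0.25
0.67
0.44"
| stats sum(eval(ln(value_a))) as ln_sum
| eval "product(value_a)"=exp(ln_sum)
| table "product(value_a)"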
Hi @madhav_dholakia,

In Simple XML, you can generate categorical choropleth maps with color-coded city boundaries. In Dashboard Studio, however, choropleth maps are limited to numerical distributions, e.g. OpenIssues by city, and the geospatial lookup geometry isn't always interpreted correctly.

In both examples, I've used mapping data published by the Office for National Statistics at https://geoportal.statistics.gov.uk. Search for BDY_TCITY DEC_2015 to download the corresponding KML file.

As a compromise, you can use a marker map to display color-coded markers at city centers by latitude and longitude. Here's the source:

{
  "visualizations": {
    "viz_KxsdmDQb": {
      "type": "splunk.map",
      "options": {
        "center": [52.560559999999924, -1.4702799999984109],
        "zoom": 6,
        "layers": [
          {
            "type": "marker",
            "dataColors": "> primary | seriesByName('Status') | matchValue(colorMatchConfig)",
            "choroplethOpacity": 0.75,
            "additionalTooltipFields": ["Status", "StoreID", "City", "OpenIssues"],
            "latitude": "> primary | seriesByName('lat')",
            "longitude": "> primary | seriesByName('lon')"
          }
        ]
      },
      "context": {
        "colorMatchConfig": [
          { "match": "Dormant/Green", "value": "#118832" },
          { "match": "Warning/Amber", "value": "#cba700" },
          { "match": "Critical/Red", "value": "#d41f1f" }
        ]
      },
      "dataSources": {
        "primary": "ds_CtvaIPJ3"
      }
    }
  },
  "dataSources": {
    "ds_CtvaIPJ3": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults format=csv data=\"StoreID,City,OpenIssues,Status,lat,lon\r\nStore 1,London,3,Critical/Red,51.507222,-0.1275\r\nStore 2,York,2,Warning/Amber,53.96,-1.08\r\nStore 3,Bristol,0,Dormant/Green,51.453611,-2.5975\r\nStore 4,Liverpool,1,Warning/Amber,53.407222,-2.991667\" \r\n| table StoreID City OpenIssues Status lat lon",
        "queryParameters": {
          "earliest": "-24h@h",
          "latest": "now"
        }
      },
      "name": "Choropleth map search"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {}
        }
      }
    }
  },
  "inputs": {},
  "layout": {
    "type": "absolute",
    "options": {
      "width": 918,
      "height": 500,
      "display": "auto"
    },
    "structure": [
      {
        "item": "viz_KxsdmDQb",
        "type": "block",
        "position": { "x": 0, "y": 0, "w": 918, "h": 500 }
      }
    ],
    "globalInputs": []
  },
  "description": "",
  "title": "eaw_store_status_ds"
}

As a static workaround, the Choropleth SVG visualization allows you to upload a custom image, e.g. a stylized map of England and Wales, and define custom SVG boundaries and categorical colors. The Dashboard Studio documentation includes a basic tutorial at https://docs.splunk.com/Documentation/Splunk/latest/DashStudio/mapsChorSVG.
So the UF recognizes the time change when it writes this message into the log, but its scheduler didn't handle it correctly and failed to run the next round at the correct time. Definitely time to create a Splunk support case.