All Posts


Hi @ramkyreddy, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @blkscorpio, why are you speaking of a UF? An HF has all of a UF's features, so you don't need another instance on the same machine. You can set up your HF to forward logs to indexers in [Settings > Forwarding and Receiving > Forwarding]. Ciao. Giuseppe
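For reference, a minimal sketch of the outputs.conf that this UI path ends up writing (the indexer host names and port below are placeholders, not taken from your environment):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

The same stanza format is what a UF would use, which is another reason a separate UF instance on the HF host adds nothing.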
How do I get the Heavy Forwarder instance for my VM box to set it up? Does the UF instance differ from the HF instance?
Hello sir/madam, I tried to migrate data from Events Service 4.5.x to 23.x and followed the steps mentioned in https://docs.appdynamics.com/appd/onprem/23.2/en/events-service-deployment/data-migration-from-events-service-4-5-x-to-23-x/migrate-events-service-from-version-4-5-x-to-23-x. When I finished those steps, an error occurred in the log files. After that, I submitted the new Events Service in admin.jsp and then some data was lost. You can observe the thrown exception in the attached image. @Ryan.Paredez I look forward to hearing from you. Regards Hamed
The answer is in the planning dept basement behind a locked door with the sign "beware of the leper". seriesColorsByField (object, default: n/a) - Specify the colors used for specific pie slice labels. For example: {"April": "#008000", "May": "#FFA500"} Much of the pie chart documentation does not include this option. I found it once in the Splunk documentation for the Dashboard Studio pie chart but could not find it again. I then searched for "splunk pie chart options dashboard studio code" and found the following URL: https://docs.splunk.com/Documentation/Splunk/9.1.1/DashStudio/objOptRef
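As a sketch of where that option sits inside a Dashboard Studio definition's "visualizations" object (the visualization ID and data source name here are placeholders):

"viz_pie_1": {
    "type": "splunk.pie",
    "dataSources": { "primary": "ds_search_1" },
    "options": {
        "seriesColorsByField": { "April": "#008000", "May": "#FFA500" }
    }
}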
Use chart. index=_internal source="*license_usage.log" | eval bytes=b | eval GB = round(bytes/1024/1024/1024,3) | bucket _time span=1d | eval _time = strftime(_time, "%F") | chart sum(GB) over h by _time
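If you want to see the shape this produces before pointing it at license_usage.log, here is a small emulation with made-up host names and sizes (hosts come out as rows, one column per day):

| makeresults count=4
| streamstats count as n
| eval h = if(n <= 2, "host1", "host2")
| eval _time = strftime(relative_time(now(), "-" . n . "d"), "%F")
| eval GB = n * 1.5
| chart sum(GB) over h by _time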
We are using the Microsoft Graph Security API Add-On to ingest all security alerts for our org using the Microsoft Graph Security API. Recently the access token became invalid and the data paused. We renewed the secret key and updated it with the new one. Until then, data ingestion was paused, and no backfill happened after that. Is there any way I can backfill that one week of paused data? Any help, points of contact, or links to read would be appreciated. Is that something to do on the source end to re-ingest the data via the Add-On again?
Hi there, I have this query: index=_internal source="*license_usage.log" | eval bytes=b | eval GB = round(bytes/1024/1024/1024,3) | timechart span=1d sum(GB) by h

This query shows results like this:

_time        host1  ...
2023-11-10
2023-11-11
...

And I want results like this:

Host   2023-11-10  ...
host1
host2
...

How can I do this?
Scatter plots are two-dimensional so they take 3 arguments: value, x-axis, and y-axis.  A fourth argument would call for a three-dimensional chart and that calls for an add-on like https://splunkbase.splunk.com/app/3138.  See https://docs.splunk.com/Documentation/Splunk/9.1.2/Viz/ScatterChart
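As a rough sketch (the field names are placeholders borrowed from the related question), the scatter chart reads the first column as the mark label and the next two as x and y, so a fourth column such as PartnerId is simply not rendered - which is why a fourth dimension calls for the 3D add-on linked above:

index=your_index ``` base search is a placeholder ```
| table article value calculated_category PartnerId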
Hello @Manami, We are experiencing the same thing with Splunk Enterprise: memory utilization on average went up ~30% and CPU load went over 50% across the indexing tier when we moved to this version. I will let you know if anything is found with the recent case that was opened. Were you able to find the problem with the universal forwarder?
Hello, I have a problem: how do I get a fourth value from a table and scatter plot to use the value in a token? We have a table with 4 columns and a scatter plot chart for display: article, value, calculated category, and PartnerId. Unfortunately, the PartnerId is not displayed in the scatter plot chart. Can I somehow read out the fourth value to display further details on a PartnerId in a dashboard?
Thanks a lot! This was very helpful and exactly what I needed. I appreciate you sharing the documentation links as well; I've been reading through them.
As far as I know, no volunteer here possesses mind-reading superpowers. If you want concrete help, illustrate (in text) relevant data input (anonymize as needed but preserve key characteristics), illustrate (in text) the desired output - you already did - then explain the logic to arrive at the result from the input. If you have a field called "month" with values "August", "Sept", "Oct", and a field named country with values "AUS", "JER", "IND", "ASI", this search will give you a semblance of what you illustrated. | chart count over month by country
The command you are looking for is eventstats. index=zzzzzz | stats count as Total, count(eval(txnStatus="FAILED")) as "Failed_Count", count(eval(txnStatus="SUCCEEDED")) as "Passed_Count" by country, type, ProductCode, errorinfo | eventstats sum(Total) as Total by country | fields country, ProductCode, type, Failed_Count, Passed_Count, errorinfo, Total  
Sometimes, running the same search generates different orders when trellis visualization is used. For example,   ((sourcetype=A field1=*) OR (sourcetype=B user=* field2=*)) clientip=* earliest="01/24/2023:02:00:00" latest="01/24/2023:08:00:00" | fields clientip user field1 field2 | eval user = mvindex(split(user, ":"), 1) | eventstats values(user) as user by clientip | eval clientip = clientip . if(isnull(user), "/", "/" . mvjoin(user, ",")) | timechart span=5m limit=19 count(field1) as s1 count(field2) as s2 by clientip   Here, field1 only exists in sourcetype A; user and field2 only exist in sourcetype B; the search period is fixed in the past. This means that the search result cannot change. But the following are two screenshots of two consecutive executions. They show the same number of trellis panels with exactly the same clientip titles; each clientip's graph is also the same across the two runs. But obviously the order is rearranged. (In the Statistics view, columns are arranged in lexicographic order of "s1:clientip" and "s2:clientip".) Is there some way to be certain of the order?
Hi, this is doable. You could, for example, add several time pickers to your dashboard and add those to your queries as tokens. I cannot recall now whether this requires you to add additional tokens to set those time limits correctly in your search. r. Ismo
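A rough sketch of how a time picker's tokens end up in a panel search - the token name time1 is an assumption, it depends on what you call the time input in your dashboard:

index=your_index earliest=$time1.earliest$ latest=$time1.latest$
| stats count

A second panel would reference the second picker the same way, e.g. $time2.earliest$ and $time2.latest$.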
Please illustrate/mock the data.  Without knowing the actual data structure, it is impossible to know the relationship and your true intention.
The log line is about 9999 characters long with spaces, and not all the log line is ingested - I think I need to create a limits.conf file?  Absolutely. Good data is the only guarantee that any work on it will be valid. This said, Splunk's KV extraction does not look beyond the first occurrence of a key. (And that's a good thing. It is a risky proposition for any language to assume the intention of multiple occurrences of a left-hand-side value.) The main problem is caused by the developers, who take pains to invent a structured data format that is not standard. It seems that they use foo[] to indicate an array (events), then use bar() to indicate an element; inside an element, they use = to separate key and value. Then, on top of this, they use geez() to signal a top-level structure ("VUpdate") with key-value pairs that includes the events[] array. If you have any influence over the developers, you should urge them, beg them, implore them to use a standard structured representation such as JSON. If not, you can use Splunk to try to parse out the structure. But this is going to be messy and will never be robust. Unless your developers swear on their descendants' descendants (and their ancestors' ancestors) not to change the format, your future can be ruined at their whim. Before I delve into SPL, I also want to clarify this: Splunk already gives you the following fields: channelCode, contentType, duration, eventNumber, eventTitle, events, onAir, system, type, and utcStartDateTime. Is this correct? While you can ignore any second-level fields such as eventTitle and eventNumber, I also want to confirm that events includes the whole thing from [ all the way to ]. Is this correct? I'll suggest two approaches; both rely on the structure I reverse-engineered above. The first one is straight string manipulation and uses Splunk's split function to isolate individual events.

| fields system channelCode type events
| eval events = split(events, "),")
| mvexpand events
| rename events AS _raw
| rex mode=sed "s/^[\[\s]*Event\(// s/[\)\]]//g"
| kv kvdelim="=" pairdelim=","

The second one tries to "translate" your developers' log structure into JSON using string manipulation.

| rex field=events mode=sed "s/\(/\": {/g s/ *\)/}}/g s/=\s+/=/g s/\s+,/,/g s/(\w+)=([^,}]+)/\"\1\": \"\2\"/g s/\"(true|false)\"/\1/g s/Event/{\"Event/g"
| spath input=events path={}
| fields - events
| mvexpand {}
| spath input={}
| fields - {}
| rename Event.* As *

The second approach is not more robust; if anything, it is less. But it better illustrates the perceived structure.
Either way, your sample data should give you something like

channelCode  contentType  duration      eventNumber  eventTitle         onAir  system     type        utcStartDateTime
UH           Prog         00:00:05.000  725538339    BooRadley          true   GRP1-VIPE  NextEvents  2023-11-17T15:42:10.160Z
UH           Bumper       00:00:02.000  725538313    REGGAE-2           false  GRP1-VIPE  NextEvents  2023-11-17T15:42:15.160Z
UH           Commercial   00:01:30.000  725538320    CHRISITAN MISSION  false  GRP1-VIPE  NextEvents  2023-11-17T15:42:17.160Z

This is an emulation you can play with and compare with real data

| makeresults
| eval _raw = "20231117154211 [18080-exec-9] INFO EventConversionService () - SArts: VUpdate(system=GRP1-VIPE, channelCode=UH, type=NextEvents, events=[Event(onAir=true, eventNumber=725538339, utcStartDateTime=2023-11-17T15:42:10.160Z, duration=00:00:05.000, eventTitle=BooRadley, contentType=Prog ), Event(onAir=false, eventNumber=725538313, utcStartDateTime=2023-11-17T15:42:15.160Z, duration=00:00:02.000, eventTitle= REGGAE-2, contentType=Bumper), Event(onAir=false, eventNumber=725538320, utcStartDateTime=2023-11-17T15:42:17.160Z, duration=00:01:30.000, eventTitle=CHRISITAN MISSION , contentType=Commercial)])"
| extract ``` data emulation above ```

Hope this helps.
Greetings, I have Splunk 9.1.1 and am trying to import an Aruba 7210 into Splunk using the Aruba app with UDP 514. Sourcetype: aruba:syslog. I have other devices (Cisco) going into the same Splunk instance and they are reporting OK. The Splunk server can ping the Aruba and vice versa. Should I try any other sourcetypes? Anything else I should look for under the hood to see why communication is not occurring? Thank you,
I have a use case that requires logging to be captured and have been following this document: How do I set up the ForgeRock Identity Cloud app for Splunk?, which references --> https://splunkbase.splunk.com/app/6272

ForgeRock Identity Cloud App for Splunk captures audit and debug logs from ForgeRock Identity Cloud tenants. A sample dashboard is included to graphically illustrate various captured metrics, for example, authentication events, identity registrations, and top-active users. Sample searches are also included to extend or modify the sample dashboard.

The problem is that the app should not be calling the following endpoint: /monitoring/logs/tail

It should be calling the following endpoint as noted in the ForgeRock product documentation: /monitoring/logs

To reduce unwanted stresses on the system, Identity Cloud limits the number of requests you can make to the /monitoring/logs endpoint in a certain timeframe: The page-size limit is 1000 logs per request. The request limit is 60 requests per minute. The theoretical upper rate limit is therefore 60,000 logs per minute.

The reason this needs to be changed: when using the logs tail endpoint, the /monitoring/logs/tail endpoint has the same limits and response headers as the /monitoring/logs endpoint described above. However, the endpoint also has a limit of 20,000 lines per request, which supersedes the page-size limit of 1000 logs per request. Because calls to the /monitoring/logs/tail endpoint do not always fetch all logs, use this endpoint for debugging only. Use the /monitoring/logs endpoint when you need to fetch all logs.

I did find:

grep -i -R "/tail" forgerock/

which pointed me to:

forgerock//bin/input_module_forgerock.py:        response = helper.send_http_request(forgerock_id_cloud_tenant_url + "/monitoring/logs/tail", 'GET', parameters=parameters, payload=None, headers=headers, cookies=None, verify=True, cert=None, timeout=60, use_proxy=False)

Lines 51-52 of input_module_forgerock.py show:

# The following examples send rest requests to some endpoint.
response = helper.send_http_request(forgerock_id_cloud_tenant_url + "/monitoring/logs/tail", 'GET', parameters=parameters, payload=None, headers=headers, cookies=None, verify=True, cert=None, timeout=60, use_proxy=False)

I suspect updating this to /monitoring/logs and restarting the app may resolve this:

# The following examples send rest requests to some endpoint.
response = helper.send_http_request(forgerock_id_cloud_tenant_url + "/monitoring/logs", 'GET', parameters=parameters, payload=None, headers=headers, cookies=None, verify=True, cert=None, timeout=60, use_proxy=False)

But when trying to grab logs it's failing:

2023-11-16 15:33:34,178 DEBUG pid=261576 tid=MainThread file=connectionpool.py:_make_request:461 | https://openam-testxyz.id.forgerock.io:443 "GET /monitoring/logs?source=am-authentication%2Cam-access%2Cam-config%2Cidm-activity&_pagedResultsCookie=eyJfc29ydEzbnRpY25Il19fQ HTTP/1.1" 500 74
2023-11-16 15:33:34,179 INFO pid=261576 tid=MainThread file=base_modinput.py:log_info:295 | Unexpected response from ForgeRock: 500
2023-11-16 15:33:34,179 ERROR pid=261576 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/forgerock/bin/forgerock/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/forgerock/bin/forgerock.py", line 76, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/forgerock/bin/input_module_forgerock.py", line 60, in collect_events
    response.raise_for_status()
  File "/opt/splunk/etc/apps/forgerock/bin/forgerock/aob_py3/requests/models.py", line 943, in raise_for_status
    raise HTTPError(http_error_msg, response=self)

Hoping someone has an idea @jknight