Hello community members! Has anyone successfully integrated the Backbase fintech product with Splunk for logging and monitoring purposes? If so, could you share your insights, experiences, and any tips on how to effectively set up and maintain this integration? Thank you in advance for your help!
We need to set up an alert that fires when a server has had no Java process for 15 minutes, with only one alert sent until the issue is resolved. Do we need to create two windows for this?

| mstats count(os.cpu.pct.used) as c where index=cpet-os-metrics host_ip IN (10.0.0.1,10.0.0.2) by host_ip
| join host type=left
    [| mstats avg(ps_metric.pctMEM) as avg_mem_java avg(ps_metric.pctCPU) as avg_cpu_java count(ps_metric.pctMEM) as ct_java_proc where index=cpet-os-metrics host_ip IN (10.0.0.1,10.0.0.2) sourcetype=ps_metric COMMAND=java by host host_ip COMMAND USER]
| fields - c
| eval is_java_running = if(ct_java_proc>0, 1, 0)
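One possible simplification (a sketch, assuming the same index, sourcetype, and host list as above): combine the two mstats calls with append instead of join, fill the missing Java counts with zero, and keep only hosts where no Java process was seen. For the "only one alert until resolved" part, per-host_ip alert throttling (suppressing further triggers for at least the search window) is usually enough, rather than a second window.

| mstats count(os.cpu.pct.used) as c where index=cpet-os-metrics host_ip IN (10.0.0.1,10.0.0.2) by host_ip
| append
    [| mstats count(ps_metric.pctCPU) as ct_java_proc where index=cpet-os-metrics sourcetype=ps_metric COMMAND=java host_ip IN (10.0.0.1,10.0.0.2) by host_ip]
| stats sum(c) as c sum(ct_java_proc) as ct_java_proc by host_ip
| fillnull value=0 ct_java_proc
| eval is_java_running = if(ct_java_proc > 0, 1, 0)
| where is_java_running = 0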
Hello all, I send logs from multiple endpoints to a standalone Splunk HTTP Event Collector. Many logs are sent successfully, but some of them (same index, same endpoint, ...) get a 403 response when sending and are not sent. I think it may be related to threads or sockets. Any ideas are appreciated.
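For anyone reproducing this, a minimal way to test a single event against HEC outside the application (a sketch; the hostname and the <HEC_TOKEN> placeholder are assumptions to replace with your own values):

curl -k https://your-splunk-host:8088/services/collector/event \
  -H "Authorization: Splunk <HEC_TOKEN>" \
  -d '{"event": "connectivity test", "sourcetype": "manual"}'

If this also returns 403, the token itself is the usual suspect (for example, a disabled token returns 403) rather than client-side threading.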
I want to calculate the percentage of status code 200 out of the total count of status codes, by time. I have written the query below using appendcols. The query works, but it is not giving the percentage per minute or by _time. I want this percentage of status code 200 by _time as well, so can anybody help me with how to write this query?

index=* sourcetype=* host=*
| stats count(sc_status) as Totalcount
| appendcols
    [ search index=* sourcetype=* host=* sc_status=200
    | stats count(sc_status) as Count200 ]
| eval Percent200=Round((Count200/Totalcount)*100,2)
| fields _time Count200 Totalcount Percent200
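A single stats pass keeps _time available and avoids appendcols entirely. A sketch, assuming one-minute granularity (adjust the span as needed):

index=* sourcetype=* host=*
| bin _time span=1m
| stats count(sc_status) as Totalcount count(eval(sc_status=200)) as Count200 by _time
| eval Percent200=round((Count200/Totalcount)*100,2)
| table _time Count200 Totalcount Percent200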
Hi all, I am trying to authenticate a user against the REST API, but when testing via curl, it fails when using the LB URL (F5). The user has replicated across all SHC members and can log in via the UI.

# curl -k https://Splunk-LB-URL:8089/services/auth/login -d username=user -d password='password'
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="WARN" code="incorrect_username_or_password">Login failed</msg>
  </messages>
</response>

But when I try the same against the SH member directly, it works.

# curl -k https://Splunk-SearchHead:8089/services/auth/login -d username=user -d password='password'
<response>
  <sessionKey>gULiq_E7abGyEchXyw7rxzwi83Fhdh8gIGjPGBouFUd37GuXF</sessionKey>
  <messages>
    <msg code=""></msg>
  </messages>
</response>

Initially I thought it could be something on the LB side, but the LB URL works just fine for the "admin" user.

# curl -k https://Splunk-LB-URL:8089/services/auth/login -d username=admin -d password='password'
<response>
  <sessionKey>gULiq_E7abGyEchXyw7rxzwi83Fhdh8gIGjPGBouFUd37GuXF</sessionKey>
  <messages>
    <msg code=""></msg>
  </messages>
</response>

Has anyone come across an issue like this? Why would admin work fine via the LB while a new local user works only against the SH directly and not via the load balancer?
Hello guys, hope you are doing great! I want to configure a query. Some users are disabled in AD, and in Splunk ES, when I open the Identity Investigator, it also shows them as disabled (cn=*,ou=disabled,ou=united,ou=accounts,dc=global,dc=ual,dc=com). But under Users it is still showing their role under Roles, when it should show no_access. Now I want to build a query and create an alert. Can you please help me with this? Ani
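A possible starting point (a sketch: identity_lookup_expanded is the standard ES identities lookup, but the field holding the AD distinguished name depends on how your identities are ingested, so the dn field name here is an assumption to adjust):

| inputlookup identity_lookup_expanded
| search dn="*ou=disabled*"
| table identity dn

From there the results could be compared against the roles assigned in Splunk and turned into an alert whenever a disabled identity still shows something other than no_access.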
Can someone please help me with this rule? I have been assigned to create a bunch of similar rules, but I am struggling with a few. This is what I have so far:

========================================
Strategy Abstract
The strategy will function as follows:
- Utilize tstats to summarize SMB traffic data.
- Identify internal hosts scanning for open SMB ports outbound to external hosts.

Technical Context
This rule focuses on detecting abnormal outbound SMB traffic.
===============================================================================

The SPL is generating 0 errors but also 0 matches.

| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_ip) as dest_ip dc(All_Traffic.dest_ip) as unique_dest_ips values(All_Traffic.dest_port) as dest_port values(All_Traffic.action) as action values(sourcetype) as sourcetype
    from datamodel=Network_Traffic.All_Traffic
    where (All_Traffic.src_ip [inputlookup internal_ranges.csv | table src_ip] OR NOT All_Traffic.dest_ip [ inputlookup internal_ranges.csv | table dest_ip]) AND All_Traffic.dest_port=445
    by _time All_Traffic.src_ip span=5m
| `drop_dm_object_name(All_Traffic)`
| where unique_dest_ips>=50
| search NOT [ | inputlookup scanners.csv | table ip | rename ip as src_ip]
| search NOT src_ip = "x.x.x.x"
| head 51
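A likely culprit is the where clause: All_Traffic.src_ip [subsearch] has no operator between the field and the subsearch, so the filter can never match. One possible correction (a sketch, keeping the src_ip/dest_ip column names from the draft and assuming internal_ranges.csv holds exact values; CIDR ranges may not expand as expected inside a tstats where clause):

| tstats summariesonly=true allow_old_summaries=true dc(All_Traffic.dest_ip) as unique_dest_ips values(All_Traffic.dest_port) as dest_port
    from datamodel=Network_Traffic.All_Traffic
    where All_Traffic.dest_port=445
        AND [| inputlookup internal_ranges.csv | rename src_ip as All_Traffic.src_ip | table All_Traffic.src_ip]
        AND NOT [| inputlookup internal_ranges.csv | rename dest_ip as All_Traffic.dest_ip | table All_Traffic.dest_ip]
    by _time All_Traffic.src_ip span=5m
| `drop_dm_object_name(All_Traffic)`
| where unique_dest_ips>=50

The renames matter because a subsearch feeds its field names back verbatim, and tstats only understands the fully qualified datamodel field names.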
Watch on-demand

Prevent unplanned downtime with Splunk | Featuring Travelport

Distributed ecosystems, tool sprawl, and high customer expectations: those are only a few of the many challenges that make it increasingly difficult for ITOps, engineers, and developers to fully track what happens in their environments. In this context, it can take too long to detect incidents, diagnose them, and resolve them before they impact customers. Find out how Travelport, a Splunk customer, leverages Splunk to address infrastructure and application issues with a unified and integrated observability approach. Learn about real-world troubleshooting and monitoring use cases and see how the combination of Splunk Cloud and Splunk Observability helps Travelport extract meaningful insights and take action promptly.

What you'll learn:
- Perform real-time troubleshooting with logs and traces in a codeless environment
- Improve the incident response journey with glass-table views of your IT environment
- Collect key insights in dashboards for better business performance analysis

Who will benefit: Splunk Administrators, Frontend Engineers, Site Reliability Engineers, DevOps Engineers, Directors of Web and eCommerce, Directors of UI/UX, Mobile App Developers, Platform Engineers, IT Operations Engineers, Systems Administrators, and many more.

About Travelport
Travelport is a global technology company that powers bookings for hundreds of thousands of travel suppliers worldwide. Buyers and sellers of travel are connected by the company's next-generation marketplace, Travelport+, which simplifies how brands connect, upgrades how travel is sold, and enables modern digital retailing. Headquartered in the United Kingdom and operating in more than 165 countries around the world, Travelport is focused on driving innovation that simplifies the complex travel ecosystem.
Hi Community, I'm fairly inexperienced when it comes to anything other than quite basic searches, so my apologies in advance. I have a field which returns several values, and I only wish to return one of them in my searches. The field name is "triggeredComponents{}.triggeredFilters{}.trigger.value" and it returns several values of different types, for example:

1
5
out
text / text text text
hostname1
hostname2
445

I only wish to retrieve and view the "text / text text text" value, and then pop that into a |stats command. Please can someone offer some advice? Many thanks in advance!
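One approach (a sketch; the "text / text" pattern is a placeholder for whatever uniquely identifies the value you want) is to filter the multivalue field with mvfilter, quoting the field name in eval with single quotes because of the braces and dots:

| eval trigger_value = mvfilter(match('triggeredComponents{}.triggeredFilters{}.trigger.value', "text / text"))
| stats count by trigger_value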
Using the Add-On Builder I built a custom Python app to collect some asset information over an API. I'll preface all of this by saying my custom Python code in VS Code works all the time, every time, no hiccups. Using a select statement in the API request, I can gather specific fields. The more fields I define, the more issues I run into in Splunk. Basically it feels like the app is rate limited. I would expect it to run to just under an hour; it usually fails after 10 minutes and starts again at the 2-hour (7200 seconds) interval configured on the input page. If I define fewer fields in the select request, it runs a little longer but still ends up failing, and obviously I'm not getting the data I want. If I set the bare minimum of one field, it runs for the expected time, stops, and starts again at its configured interval. I'm hesitant to say which platform, but it is cloud based. I'm running my app from an on-prem heavy forwarder indexing to Splunk Cloud. The input interval config is 2 hours. The Python script iterates through requests due to paging limitations, with delays between requests based on some math I did with the total number of assets and pages; it's about 3 seconds between requests. But again, my code works flawlessly when run from VS Code. The target API isn't rate limiting me due to the scripted interval; at least, I have no reason to believe that it is. I've opened a ticket with Splunk, but I wanted to see if anyone else has experience with the Splunk Add-on Builder and custom Python modules.
Hello, I'm working on updating a dashboard panel to handle numerical values. I want the panel to show red any time the count is not zero. This always works when the number is positive, BUT occasionally we have a negative number, such as "-3". Is there a way to make negative values red as well? Basically, anything that isn't zero should be red. Thanks for the help!
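For a Simple XML panel, a color palette expression can test the cell directly; value is the built-in reference to the current cell. A sketch, assuming the field is named count (the hex colors are placeholders too):

<format type="color" field="count">
  <colorPalette type="expression">if(value != 0, "#DC4E41", "#53A051")</colorPalette>
</format>

Since the test is value != 0 rather than value > 0, both positive and negative counts turn red.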
Hi all, I need to create some correlation searches on Splunk audit events, but I didn't find any documentation about the events to search. For example, I don't know how to identify the creation of a new role or updates to an existing one; I found only action=edit_roles, from which I can only know the associated user and not the changed role. Can anyone point me to a URL with Splunk audit information? Ciao. Giuseppe
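One workaround while looking for official docs (a sketch, not guaranteed complete; field names such as uri and user may vary slightly by version): role creations and edits made through Splunk Web or the REST API also appear in _internal as splunkd REST access logs, where the role name is part of the URI:

index=_internal sourcetype=splunkd_access uri="*/authorization/roles*" (method=POST OR method=DELETE)
| rex field=uri "authorization/roles/(?<role>[^/\s?]+)"
| table _time user method status role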
Hi Splunkers, today I have a question that is not about a "technical how": my doubt is about best practice.

Environment: a Splunk Cloud combo instance (Core + Enterprise Security) with some Heavy Forwarders.
Task: perform some field extractions.
Details: the add-ons for parsing are already installed and configured, so we don't have to create new ones; we simply need to enrich/expand existing ones. Those add-ons are installed on both the cloud components and the HFs.

The point is this: since we already have add-ons for parsing, we could simply edit their props.conf and transforms.conf files; of course, because the add-ons are installed on both the cloud components and the HFs, we would have to perform the changes on all of them. For example, editing the add-on only on the cloud components with GUI field extraction implies that the new fields will be parsed at index time there, because they will not be pre-parsed by the HFs. Plus, we know we should put a copy of those files in the local folder to avoid editing the default ones, etcetera, etcetera.

But, at the same time, for our SOC we created a custom app used as a container to store all customizations performed by/for them, following one of Splunk's best practices. We store reports, alerts, and so on there: by "we store there" I mean that when we create something and choose an app context, we set our custom SOC app. With this choice, we could simply perform a field extraction with the GUI and assign our custom app as the app context; of course, with this technique, the custom regexes are saved only on the cloud components and not on the HFs.

So my question is: when we talk about field extraction, if we consider that the pre-parsing performed by the HFs is desired but NOT mandatory, what is the best choice? Maintain all field extractions in the add-ons, or split between the OOTB ones and the custom ones, using our custom SOC app?
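For reference, a search-time extraction living in the custom SOC app is just a stanza like this (a sketch; the app name, sourcetype, and regex are placeholders):

# $SPLUNK_HOME/etc/apps/soc_custom/local/props.conf
[my:sourcetype]
EXTRACT-soc_user = user=(?<soc_user>\S+)

Search-time extractions like EXTRACT only need to exist where searches run (the cloud search tier), which is why the GUI approach never has to touch the HFs.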
All the modules under "Search Expert" do not have a launch button when clicked. Are these not available for veterans any longer?   
What are the prerequisites for Splunk Sales Engineer II? I think I need Splunk Sales Engineer I, but I heard from a friend that that certification has been discontinued, so I don't know.
Hello everyone, I am still relatively new to Splunk. I would like to add an additionalTooltipField to my maps visualization, so that when you hover over a marker point, more data details about the marker appear. I have formulated the following query:

source="NeueIP.csv" host="IP" sourcetype="csv"
| rename Breitengrad as latitude, L__ngengrad as longitude, Stadt as Stadt, Kurzbeschreibung as Beschreibung
| eval CPU_Auslastung = replace(CPU_Auslastung, "%","")
| eval CPU_Auslastung = tonumber(CPU_Auslastung)
| eval CPU_Color = case(
    CPU_Auslastung > 80.0, "#de1d20",
    CPU_Auslastung > 50.0, "#54afda",
    true(), "#4ade1d"
  )
| table Stadt, latitude, longitude, Kurzbeschreibung, Langbeschreibung, CPU_Auslastung, CPU_Color
| eval _time = now()

And I tried to adjust some things in the source code so that the additionalTooltipField appears. Last of all:

"visualizations": {
  "viz_map_1": {
    "type": "splunk.map",
    "options": {
      "center": [50.35, 17.36],
      "zoom": 4,
      "layers": [
        {
          "type": "marker",
          "latitude": "> primary | seriesByName('latitude')",
          "longitude": "> primary | seriesByName('longitude')",
          "dataColors": ">primary | seriesByName(\"CPU_Auslastung\") | rangeValue(config)",
          "additionalTooltipFields": ">primary | seriesByName(\"Stadt\")",
          "markerOptions": {
            "additionalTooltipFields": ["Stadt", "Kurzbeschreibung"]
          },
          "hoverMarkerPanel": {
            "enabled": true,
            "fields": ["Stadt", "Kurzbeschreibung"]
          }
        }
      ]
    },

My sample data is as follows:

Stadt, Breitengrad, Längengrad, Kurzbeschreibung, Langbeschreibung, CPU_Auslastung
Berlin, 52.52, 13.405, BE, Hauptstadt Deutschlands, 45%
London, 51.5074, -0.1278, LDN, Hauptstadt des Vereinigten Königreichs, 65%
Paris, 48.8566, 2.3522, PAR, Hauptstadt Frankreichs, 78%

Is my plan possible? Thanks for your help in advance!!
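A possible corrected marker layer (a sketch and an assumption to verify against your version's splunk.map docs: additionalTooltipFields on the layer would take an array of data source selectors like the latitude/longitude ones, and the markerOptions/hoverMarkerPanel keys from the draft are not documented map options, so they are dropped here):

"layers": [
  {
    "type": "marker",
    "latitude": "> primary | seriesByName('latitude')",
    "longitude": "> primary | seriesByName('longitude')",
    "dataColors": "> primary | seriesByName('CPU_Auslastung') | rangeValue(config)",
    "additionalTooltipFields": [
      "> primary | seriesByName('Stadt')",
      "> primary | seriesByName('Kurzbeschreibung')"
    ]
  }
]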
Hi Team,   In role we are providing the user as read only access, and set up the capabilities, Inheritress , resources, and restrictions.  But that user able to delete the query and delete the repo... See more...
Hi Team,   In role we are providing the user as read only access, and set up the capabilities, Inheritress , resources, and restrictions.  But that user able to delete the query and delete the report also, how do we hide delete option in the report?   Please guide the process.        
Hi, does anyone have experience in monitoring Azure Integration Services with AppDynamics? Suggestions on a setup would be appreciated. The services will be calling an on-premise .NET application; the ability to drill down downstream is not a must, but would be really nice to have. br Kjell Lönnqvist
Hi All, I have a dashboard which has 3 radio buttons: Both, TypeA, and TypeB. I also have a table. The requirement is that if I select Both or TypeA in the radio buttons, columnA and columnB in the table should be highlighted. If I select TypeB, only columnA should be highlighted. How can I achieve this? I have tried using a color palette expression like the one below, but no luck. Does anyone have a solution for this?

<format type="color" field="columnA">
  <colorPalette type="list">["#00FFFF"]</colorPalette>
</format>
<format type="color" field="columnB">
  <colorPalette type="expression">if(match(Type,"TypeB")," ", "#00FFFF")</colorPalette>
</format>
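A color palette expression only sees the current cell's value (as value), never other fields such as Type, which is why match(Type,"TypeB") can never work. One approach that may work instead (a sketch, assuming the radio input sets a token named type_tok, a hypothetical name, and that the table's search also references that token so the panel re-renders on change; white stands in for "no highlight"):

<format type="color" field="columnA">
  <colorPalette type="expression">"#00FFFF"</colorPalette>
</format>
<format type="color" field="columnB">
  <colorPalette type="expression">if("$type_tok$"=="TypeB", "#FFFFFF", "#00FFFF")</colorPalette>
</format>

Because tokens are substituted into the panel XML before the expression is evaluated, the comparison becomes a constant at render time.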